Data Engineer Azure Databricks

Location:
Union Center Four Corners, NJ, 07083
Salary:
Negotiable
Posted:
May 12, 2024

Resume:

Imtiaz Khan

ad5nq7@r.postjobfree.com +763-***-**** U.S. Citizen https://www.linkedin.com/in/imtiaz-khan-9a0a10230/

Senior Cloud Data Engineer

PROFESSIONAL SUMMARY:

14+ years of IT experience as an Azure Cloud Data Engineer covering a wide range of cloud components and Big Data framework technologies.

Experience as an Azure Cloud Data Engineer with Microsoft Azure technologies including Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Synapse Analytics (SQL Data Warehouse), Azure SQL Database, Azure Analysis Services, Azure Cosmos DB (NoSQL), Azure Key Vault, Azure DevOps, and Azure HDInsight, as well as Big Data technologies such as Hadoop, Apache Spark, Azure Databricks, Kafka, Apache Solr, ELK, and Cassandra.

Strong experience working with Informatica ETL (10.4/10.1/9.6/8.6/7.1.3), including PowerCenter Designer, Workflow Manager, Workflow Monitor, Informatica Server, and Repository Manager.

Solid experience in dimensional data modeling, star schema/snowflake modeling, fact and dimension tables, physical and logical data modeling, ERwin 3.x, Oracle Designer, and Data Integrator.

Experience uploading data into AWS S3 buckets using the Informatica Amazon S3 plugin.

Hands-on experience tuning mappings and identifying and resolving performance bottlenecks at various levels such as sources, targets, mappings, and sessions.

Proficient in administering Microsoft Azure IaaS/PaaS services such as Azure Virtual Machines (VMs), Virtual Network (VNet), Azure Storage, SQL Databases, Azure Active Directory (AAD), Monitoring, DNS, Autoscaling, and Load Balancing.

Skilled in structuring cluster AutoScaler for Azure Kubernetes Service (AKS) using Terraform and worked with scheduling, deploying and managing pods and replicas in AKS.

Work experience in setting up alerts and deploying multiple dashboards for individual applications in Azure Kubernetes (AKS) clusters using tools like Prometheus and Grafana.

Expertise with Terraform templates for provisioning Infrastructure like Virtual Networks, Load Balancers, Storage Accounts, Virtual Machines, Virtual Machine Scale Sets, Azure Kubernetes Cluster (AKS), Key Vaults and Log Analytics Workspace in Microsoft Azure using Terraform modules.

Well versed in updating Azure images in Azure Compute Galleries using Packer and updating those image references for Virtual Machines and Virtual Machine Scale Sets using Terraform across all environments.

Skilled in writing templates for Azure Infrastructure as code using Terraform to deploy Virtual Machines, OMS Agent extension on VMs, Log Analytics Workspace and Integrated Log Analytics with Azure VMs for monitoring the log files.

Real-time experience loading data into the AWS cloud (S3 buckets) through Informatica.

Experience using AWS services such as EC2, S3, DMS, Lambda, CloudFormation, and DynamoDB.

Created AWS data pipelines using Python, PySpark, EMR, and Step Functions.

Created Python code for Lambda functions to perform the necessary logic and derive values.
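
To make the Lambda pattern above concrete, here is a minimal, illustrative sketch of such a Python handler; the event shape and field names (gross_amount, discount) are hypothetical assumptions, not taken from the actual projects.

    import json

    def lambda_handler(event, context):
        """Derive values for each incoming record; assumes SQS-style Records."""
        results = []
        for record in event.get("Records", []):
            payload = json.loads(record["body"])
            payload["net_amount"] = payload.get("gross_amount", 0) - payload.get("discount", 0)
            results.append(payload)
        return {"statusCode": 200, "body": json.dumps(results)}

    # Local smoke test
    if __name__ == "__main__":
        test_event = {"Records": [{"body": json.dumps({"gross_amount": 100, "discount": 15})}]}
        print(lambda_handler(test_event, None))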

Successfully migrated various on-prem databases to Azure cloud (Azure SQL DB).

Good experience loading data into Azure Data Lake Storage and Blob storage, working with U-SQL, and loading data with Azure Data Factory.

Hands-on experience in Azure development; worked on Azure web applications, App Services, Azure Storage, Azure SQL Database, Virtual Machines, Azure AD, Azure Search, and Notification Hubs.

Have experience in creating pipeline jobs, scheduling triggers, mapping data flows using Azure Data Factory and using Azure Key Vaults to store credentials.

Good programming knowledge of Java, SQL, C#, Python, and Scala, with hands-on experience implementing them in Hadoop/Spark-based projects.

Involved in designing Azure Resource Manager templates and custom build steps using PowerShell.

Experience reading continuous JSON data from different source systems into Databricks Delta, processing the files using Spark Structured Streaming and PySpark, and writing the output in Parquet format.
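
As an illustration of this pattern, the sketch below shows a minimal PySpark Structured Streaming job that reads continuous JSON into Delta; the schema fields and mount paths are assumptions for the example only.

    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("json-to-delta").getOrCreate()

    # An explicit schema avoids inference on a stream; the fields are illustrative.
    schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_time", TimestampType()),
        StructField("amount", DoubleType()),
    ])

    raw = (spark.readStream
           .schema(schema)
           .json("/mnt/landing/events/"))          # hypothetical landing path

    (raw.writeStream
        .format("delta")                           # swap to "parquet" for plain Parquet output
        .option("checkpointLocation", "/mnt/checkpoints/events/")
        .outputMode("append")
        .start("/mnt/curated/events/"))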

Installed and configured Apache/WebLogic on Solaris, Linux, and Windows.

Adept at working with various components of the Hadoop ecosystem: HDFS, MapReduce, YARN, HBase, Hive, Spark, Pig, Sqoop, Flume, ZooKeeper, and Kafka.

Configured kickstart servers for complete hands-free installation of workstations, with custom profiles, begin/finish scripts, and custom package suites/clusters.

Experience migrating on-premises workloads to Microsoft Azure using Azure Site Recovery and Azure Backup.

Experience in performance monitoring, security, troubleshooting, backup, disaster recovery, maintenance, and support of Linux systems.

Architected complete scalable data pipelines, data warehouse for optimized data ingestion.

Conducted complex data analysis and report on results.

Performed routine developer/DBA tasks such as handling user permissions and space issues on production and semi-production servers and managing maintenance jobs.

Constructed data staging layers and fast real-time systems to feed BI applications and machine learning algorithms.

Built an enterprise ingestion Spark framework to ingest data from different sources (S3, Salesforce, Excel, SFTP, FTP, and JDBC databases) that is 100% metadata-driven with full code reuse, letting junior developers concentrate on core business logic rather than Spark/Scala coding.
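
A minimal sketch of what such a metadata-driven ingestion loop can look like in PySpark; the source list, paths, and connection details below are placeholders rather than the framework's actual metadata model.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

    # Hypothetical metadata: one entry per feed, so adding a source needs no code change.
    sources = [
        {"name": "orders", "format": "csv", "path": "s3a://raw-bucket/orders/",
         "options": {"header": "true"}},
        {"name": "customers", "format": "jdbc", "path": None,
         "options": {"url": "jdbc:oracle:thin:@//db-host:1521/ORCL",
                     "dbtable": "CUSTOMERS", "user": "etl_user", "password": "***"}},
    ]

    for src in sources:
        reader = spark.read.format(src["format"]).options(**src["options"])
        df = reader.load(src["path"]) if src["path"] else reader.load()
        # Land every feed in a common staging area in Parquet.
        df.write.mode("overwrite").parquet(f"/mnt/staging/{src['name']}/")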

Hands on experience with ticketing tools such as Remedy, VersionOne and ServiceNow.

Experience writing shell scripts in Bash, Python, and Perl for process automation of databases, applications, backups, and scheduling.

Expertise in writing interactive/Ad-hoc queries using Presto for big data analysis.
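
For illustration, an ad-hoc Presto query can be issued from Python roughly as below, assuming the presto-python-client package and placeholder coordinator, catalog, and table names.

    import prestodb  # presto-python-client; connection details are placeholders

    conn = prestodb.dbapi.connect(
        host="presto-coordinator.example.com",
        port=8080,
        user="analyst",
        catalog="hive",
        schema="default",
    )
    cur = conn.cursor()
    cur.execute("""
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM orders
        WHERE order_date >= DATE '2024-01-01'
        GROUP BY order_date
        ORDER BY order_date
    """)
    for row in cur.fetchall():
        print(row)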

Experience in building custom connectors using Presto to integrate different data sources like Oracle, DB2, Hive, MySQL, MongoDB, Postgres and so on.

Sound knowledge in all phases of Big Data implementations like data ingestion, processing, analytics, visualization, and warehousing.

Adept at implementing query optimization techniques such as partitioning, bucketing and vectorization in MapReduce jobs using Hive.

Experience transferring data from RDBMS databases such as Oracle and SQL Server to HDFS using Hive and Sqoop.

Excellent problem solver with a positive outlook and a blend of technical and managerial skills, focused on delivering tasks ahead of deadlines and always willing to learn to ensure the team's success.

TECHNICAL SKILLS

Operating Systems: Linux (Ubuntu, CentOS), Windows, Mac OS

ETL Tools: Informatica PowerCenter 10.4/10.1/9.6/8.6/7.1.3, MuleSoft, Informatica PowerExchange, Informatica Data Quality (IDQ), SFDC Data Loader

Cloud applications: AWS, Azure, Salesforce, Snowflake

Hadoop Ecosystem: Hadoop, MapReduce, Yarn, HDFS, Pig, Oozie, Zookeeper

Big Data Ecosystem: Spark, Spark SQL, Spark Streaming, Hive, Impala, Hue

Data Ingestion: Sqoop, Flume, NiFi, Kafka

NOSQL Databases: HBase, Cassandra, MongoDB

Programming Languages: C, Scala, Core Java, J2EE (Servlets, JSP, JDBC, JavaBeans, EJB)

Frameworks: MVC, Struts, Spring, Hibernate

Web Technologies: HTML, CSS, XML, JavaScript, Maven

Scripting Languages: JavaScript, UNIX Shell, Python, R

Databases: Oracle 11g, MS Access, MySQL, SQL Server 2000/2005/2008/2012, Teradata

SQL Server Tools: SQL Server Management Studio, Enterprise Manager, Query Analyzer, Profiler, Export & Import (DTS).

IDE: Eclipse, Visual Studio, IDLE, IntelliJ

Web Services: Restful, SOAP

Tools: Bugzilla, Quick Test Pro (QTP) 9.2, Selenium, Quality Center, Test Link, TWS, SPSS, SAS, Documentum, Tableau, Mahout

Methodologies: Agile, UML, Design Patterns

PROFESSIONAL EXPERIENCE:

Cognizant - Hewlett Packard Enterprise (Union, New Jersey) Feb 2019 – Present

Manager, Data Engineer and Databricks consultant

Responsibilities:

Developed and maintained end-to-end operations of ETL data pipelines and worked with large data sets in Azure Data Factory.

Set up a data lake in Google Cloud using Google Cloud Storage, BigQuery, and Bigtable.

Created shell scripts to process raw data and load it into AWS S3 and Redshift databases.

Wrote regression SQL to merge validated data into the production environment.

Worked on data migration from on-prem servers to the cloud using Azure Data Factory and Sqoop.

Researched and implemented various ADF components such as pipelines, activities, mapping data flows, datasets, linked services, integration runtimes, triggers, and control flow.

Performed data transformation using Azure Data Factory and Azure Databricks.

Good experience working with Azure Blob and Data Lake storage and loading data into Azure Synapse Analytics.

Proficient in administering Microsoft Azure IaaS/PaaS services such as Azure Virtual Machines (VMs), Virtual Network (VNet), Azure Storage, SQL Databases, Azure Active Directory (AAD), Monitoring, DNS, Autoscaling, and Load Balancing.

Skilled in structuring cluster AutoScaler for Azure Kubernetes Service (AKS) using Terraform and worked with scheduling, deploying and managing pods and replicas in AKS.

Work experience in setting up alerts and deploying multiple dashboards for individual applications in Azure Kubernetes (AKS) clusters using tools like Prometheus and Grafana.

Expertise with Terraform templates for provisioning Infrastructure like Virtual Networks, Load Balancers, Storage Accounts, Virtual Machines, Virtual Machine Scale Sets, Azure Kubernetes Cluster (AKS), Key Vaults and Log Analytics Workspace in Microsoft Azure using Terraform modules.

Well versed in updating Azure images in Azure Compute Galleries using Packer and updating those image references for Virtual Machines and Virtual Machine Scale Sets using Terraform across all environments.

Skilled in writing templates for Azure Infrastructure as code using Terraform to deploy Virtual Machines, OMS Agent extension on VMs, Log Analytics Workspace and Integrated Log Analytics with Azure VMs for monitoring the log files.

Created an enterprise data warehouse project (OVT) to provide standardized data definitions and values and to report customer and transaction data as building blocks of the Confidential business.

Designed and developed ETL processes using Informatica 10.4 to load data from a wide range of sources such as Oracle, flat files, Salesforce, and the AWS cloud.

Based on the business logic, developed various mappings and mapplets to load data from multiple sources using transformations such as Source Qualifier, Filter, Expression, Lookup, Router, Update Strategy, Sorter, Normalizer, Aggregator, Joiner, HTTP, and XML transformations.

Developed and supported Cox integration projects (OVC/OVT/SFDC, ISSE), making sure data flows across multiple systems as per business needs.

Created data pipelines using Python, PySpark and EMR services on AWS.
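
As a hedged sketch of how such a pipeline can be triggered from Python with boto3, the example below submits a PySpark step to an existing EMR cluster and starts a Step Functions execution; the cluster ID, bucket, and state machine ARN are placeholders.

    import json
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    # Submit a PySpark step to a running EMR cluster.
    response = emr.add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXX",
        Steps=[{
            "Name": "daily-ingest",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "--deploy-mode", "cluster",
                         "s3://my-bucket/jobs/ingest.py"],
            },
        }],
    )

    # Kick off a Step Functions state machine that orchestrates the downstream tasks.
    sfn = boto3.client("stepfunctions", region_name="us-east-1")
    sfn.start_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:daily-pipeline",
        input=json.dumps({"stepIds": response["StepIds"]}),
    )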

Created Glue jobs to pull dimensional table and view data from the OVT Oracle database. Worked closely with the ET-AWS Analytics team on the ISSE project, uploading transaction and revenue data into the Salesforce cloud and AWS S3 buckets.

Extracted and uploaded data into AWS S3 buckets using the Informatica AWS plugin.

Created Salesforce runtime reports as per business requirements.

Implemented Disaster Recovery and Failover servers in Cloud by replicating data across regions.

Used Azure BLOB to access required files and Azure Storage Queues to communicate between related processes.

Performed data cleaning, feature scaling, and feature engineering using the pandas and NumPy packages in Python.
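
A small self-contained example of the kind of pandas/NumPy cleaning, scaling, and feature engineering described above; the toy columns are illustrative only.

    import numpy as np
    import pandas as pd

    # Toy transaction data standing in for the real extract.
    df = pd.DataFrame({
        "revenue":  [120.0, np.nan, 90.0, 120.0],
        "quantity": [4, 2, 0, 4],
        "region":   ["NE", "NE", "SW", "NE"],
    })

    # Cleaning: drop exact duplicates, fill numeric gaps with the column median.
    df = df.drop_duplicates()
    num_cols = df.select_dtypes(include=np.number).columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())

    # Feature engineering: per-unit price, guarding against divide-by-zero.
    df["unit_price"] = df["revenue"] / df["quantity"].replace(0, np.nan)

    # Feature scaling: min-max scale the original numeric features to [0, 1].
    df[num_cols] = (df[num_cols] - df[num_cols].min()) / (df[num_cols].max() - df[num_cols].min())
    print(df)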

Identified the most effective ways to increase performance, including hardware purchases, server configuration changes, and index/query changes; performed performance tuning of Presto clusters.

Automated jobs using different triggers (Event, Scheduled and Tumbling) in ADF.

Used Cosmos DB for storing catalog data and for event sourcing in order processing pipelines.

Designed and developed user-defined functions, stored procedures, and triggers for Cosmos DB.

Hands-on experience analyzing log files for Hadoop, LDAP, Presto, and MongoDB to find root causes.

Created Linked service to land the data from SFTP location to Azure Data Lake.

Screened Hadoop/Presto cluster job performance, performed capacity planning, and monitored connectivity and security.

General operational expertise, including strong troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.

Created Prometheus and Kibana dashboards for monitoring and migrated Splunk dashboards.

Experience analyzing log files for Hadoop and ecosystem services to find root causes.

Created High level technical design documents and Application design documents as per the requirements and delivered clear, well-communicated and complete design documents.

Developed Databricks ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.
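
An illustrative Databricks notebook cell for such a pipeline, combining DataFrame transformations with Spark SQL on a temp view; it assumes the notebook-provided spark session, and the paths and columns are placeholders.

    from pyspark.sql import functions as F

    orders = spark.read.format("delta").load("/mnt/raw/orders/")     # hypothetical source path

    cleaned = (orders
               .dropDuplicates(["order_id"])
               .withColumn("order_date", F.to_date("order_ts")))

    cleaned.createOrReplaceTempView("orders_clean")

    daily = spark.sql("""
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
        FROM orders_clean
        GROUP BY order_date
    """)

    daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_orders/")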

Used Python and shell scripts to automate ELT and admin activities.

Worked on Proof of Concept for the RPM architecture redesign using C# .Net and SSMS.

Refactored stored procedures to build .NET CLR functions and registered them in SQL Server.

Extracted data from RDBMS and ingested to Hive using Sqoop and applied PySpark transformations.

Hands-on experience with major components of the Hadoop ecosystem, including HDFS, the MapReduce framework, YARN, Hive, and Sqoop.

Collaborated with the PBI and Database teams to perform debugging, validating, and maintaining data quality.

Created an application to compare different versions of Power BI files.

Wrote C# code to extract the underlying JSON of PBIX files for visual model comparison.

Wrote code for data model comparison invoking Microsoft Analysis Services.

Created a Power BI execution engine in C#/.NET, invoking Microsoft Analysis Services and a query trace engine, which can run DAX queries without opening the Power BI file.

USAA (South Plainfield, New Jersey) Sep 2017 – Jan 2019

Data Engineer and Databricks consultant

Responsibilities:

Designed and developed data models, data structures, and ETL jobs for data acquisition and manipulation.

Expert in developing JSON scripts for deploying data-processing pipelines in Azure Data Factory (ADF).

Experience using Databricks with Azure Data Factory (ADF) to process large volumes of data.

Performed ETL operations in Azure Databricks by connecting to different relational database source systems using JDBC connectors.
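
A minimal sketch of such a JDBC read inside a Databricks notebook (where spark and dbutils are provided); the server, table, secret scope, and key names are assumptions for the example.

    jdbc_url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=salesdb"

    src_df = (spark.read
              .format("jdbc")
              .option("url", jdbc_url)
              .option("dbtable", "dbo.Customers")
              .option("user", dbutils.secrets.get("etl-scope", "sql-user"))
              .option("password", dbutils.secrets.get("etl-scope", "sql-password"))
              .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
              .load())

    src_df.write.format("delta").mode("overwrite").saveAsTable("staging_customers")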

Set up a data lake in Google Cloud using Google Cloud Storage, BigQuery, and Bigtable.

Created shell scripts to process raw data and load it into AWS S3 and Redshift databases.

Wrote regression SQL to merge validated data into the production environment.

Developed Python scripts to do file validations in Databricks and automated the process using ADF.

Developed an automated process in Azure cloud which can ingest data daily from web service and load into Azure SQL DB.

Proficient in administering Microsoft Azure IaaS/PaaS services such as Azure Virtual Machines (VMs), Virtual Network (VNet), Azure Storage, SQL Databases, Azure Active Directory (AAD), Monitoring, DNS, Autoscaling, and Load Balancing.

Skilled in structuring cluster AutoScaler for Azure Kubernetes Service (AKS) using Terraform and worked with scheduling, deploying and managing pods and replicas in AKS.

Work experience in setting up alerts and deploying multiple dashboards for individual applications in Azure Kubernetes (AKS) clusters using tools like Prometheus and Grafana.

Expertise with Terraform templates for provisioning Infrastructure like Virtual Networks, Load Balancers, Storage Accounts, Virtual Machines, Virtual Machine Scale Sets, Azure Kubernetes Cluster (AKS), Key Vaults and Log Analytics Workspace in Microsoft Azure using Terraform modules.

Well versed in updating Azure images in Azure Compute Galleries using Packer and updating those image references for Virtual Machines and Virtual Machine Scale Sets using Terraform across all environments.

Skilled in writing templates for Azure Infrastructure as code using Terraform to deploy Virtual Machines, OMS Agent extension on VMs, Log Analytics Workspace and Integrated Log Analytics with Azure VMs for monitoring the log files.

Created an enterprise data warehouse project (OVT) to provide standardized data definitions and values and to report customer and transaction data as building blocks of the Confidential business.

Designed and developed ETL processes using Informatica 10.4 to load data from a wide range of sources such as Oracle, flat files, Salesforce, and the AWS cloud.

Based on the business logic, developed various mappings and mapplets to load data from multiple sources using transformations such as Source Qualifier, Filter, Expression, Lookup, Router, Update Strategy, Sorter, Normalizer, Aggregator, Joiner, HTTP, and XML transformations.

Developed and supported Cox integration projects (OVC/OVT/SFDC, ISSE), making sure data flows across multiple systems as per business needs.

Created data pipelines using Python, PySpark and EMR services on AWS.

Created Glue jobs to pull dimensional table and view data from the OVT Oracle database. Worked closely with the ET-AWS Analytics team on the ISSE project, uploading transaction and revenue data into the Salesforce cloud and AWS S3 buckets.

Extracted and uploaded data into AWS S3 buckets using the Informatica AWS plugin.

Created Salesforce runtime reports as per business requirements.

Implemented Disaster Recovery and Failover servers in Cloud by replicating data across regions.

Developed streaming pipelines using Azure Event Hubs and Stream Analytics to analyze dealer efficiency and open-table counts from data coming in from IoT-enabled poker and other pit tables.
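
One hedged way to sketch this kind of stream in a Databricks notebook is to read Event Hubs through its Kafka-compatible endpoint with Structured Streaming; the namespace, hub name, secret scope, and paths below are placeholders.

    # Connection string pulled from a Databricks secret scope (names are assumptions).
    connection_string = dbutils.secrets.get("etl-scope", "eventhub-conn")

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "mynamespace.servicebus.windows.net:9093")
              .option("subscribe", "table-telemetry")
              .option("kafka.security.protocol", "SASL_SSL")
              .option("kafka.sasl.mechanism", "PLAIN")
              .option("kafka.sasl.jaas.config",
                      'org.apache.kafka.common.security.plain.PlainLoginModule required '
                      f'username="$ConnectionString" password="{connection_string}";')
              .load())

    parsed = events.selectExpr("CAST(value AS STRING) AS json_payload", "timestamp")

    (parsed.writeStream
           .format("delta")
           .option("checkpointLocation", "/mnt/checkpoints/telemetry/")
           .start("/mnt/curated/telemetry/"))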

Analyzed data in place by mounting Azure Data Lake and Blob storage to Databricks.

Developed Databricks ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.

Used Python and shell scripts to automate ELT and admin activities.

Worked on Proof of Concept for the RPM architecture redesign using C# .Net and SSMS.

Refactored stored procedures to build .NET CLR functions and registered them in SQL Server.

Extracted data from RDBMS and ingested to Hive using Sqoop and applied PySpark transformations.

Hands-on experience with major components of the Hadoop ecosystem, including HDFS, the MapReduce framework, YARN, Hive, and Sqoop.

Collaborated with the PBI and Database teams to perform debugging, validating, and maintaining data quality.

Created an application to compare different versions of Power BI files.

Wrote C# code to extract the underlying JSON of PBIX files for visual model comparison.

Wrote code for data model comparison invoking Microsoft Analysis Services.

Created a Power BI execution engine in C#/.NET, invoking Microsoft Analysis Services and a query trace engine, which can run DAX queries without opening the Power BI file.

CVS (Minneapolis, Minnesota) Feb 2015 – Aug 2017

Data Engineer and Databricks consultant

Responsibilities:

Designed, deployed, scheduled, and executed Spark jobs written in Python on a Hadoop cluster running Hortonworks 3 to process data.

Implemented clusters processing 250 TB of batch data each month and about 50 GB of streaming data, loading the results into data warehousing systems for further internal use.

Proficient in administering Microsoft Azure IaaS/PaaS services such as Azure Virtual Machines (VMs), Virtual Network (VNet), Azure Storage, SQL Databases, Azure Active Directory (AAD), Monitoring, DNS, Autoscaling, and Load Balancing.

Skilled in structuring cluster AutoScaler for Azure Kubernetes Service (AKS) using Terraform and worked with scheduling, deploying, and managing pods and replicas in AKS.

Work experience in setting up alerts and deploying multiple dashboards for individual applications in Azure Kubernetes (AKS) clusters using tools like Prometheus and Grafana.

Expertise with Terraform templates for provisioning Infrastructure like Virtual Networks, Load Balancers, Storage Accounts, Virtual Machines, Virtual Machine Scale Sets, Azure Kubernetes Cluster (AKS), Key Vaults and Log Analytics Workspace in Microsoft Azure using Terraform modules.

Well versed in updating Azure images in Azure Compute Galleries using Packer and updating those image references for Virtual Machines and Virtual Machine Scale Sets using Terraform across all environments.

Skilled in writing templates for Azure Infrastructure as code using Terraform to deploy Virtual Machines, OMS Agent extension on VMs, Log Analytics Workspace and Integrated Log Analytics with Azure VMs for monitoring the log files.

Used Hive to analyze data ingested into HBase by using Hive-HBase integration and compute various metrics for reporting on the dashboard.

Used Hive to perform transformations, event joins and some pre-aggregations before storing the data onto HDFS.

Developed Spark jobs for all the admin and leaderboard services.

Created Spark jobs for loading data into Redis.
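
A rough sketch of loading Spark data into Redis from the executors, assuming the redis-py package is installed on the cluster; the host and key layout are illustrative.

    import redis
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("redis-load").getOrCreate()

    # Toy leaderboard data; in the real job this came from upstream transformations.
    leaderboard_df = spark.createDataFrame(
        [("p1", 1200), ("p2", 950)], ["player_id", "score"])

    def write_partition(rows):
        # One connection per partition avoids opening a client per row.
        client = redis.Redis(host="redis-host", port=6379, db=0)   # placeholder host
        for row in rows:
            client.set(f"player:{row['player_id']}:score", row["score"])

    leaderboard_df.foreachPartition(write_partition)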

Prepared detailed design documents for all the services and implemented and enhanced new features in the application.

Involved in loading and transforming large sets of structured, semi-structured and unstructured data and analyzed them using Spark.

Utilized accumulators, broadcast variables, and RDD caching for Spark Streaming.
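
A compact, self-contained example of the broadcast/accumulator/caching pattern referenced above; the lookup map and sample records are illustrative.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("broadcast-accumulator").getOrCreate()
    sc = spark.sparkContext

    # Broadcast a small reference map instead of shipping it with every task.
    country_lookup = sc.broadcast({"US": "United States", "IN": "India"})

    # Accumulator counting malformed records across executors.
    bad_records = sc.accumulator(0)

    def parse(line):
        parts = line.split(",")
        if len(parts) != 2:
            bad_records.add(1)
            return None
        code, amount = parts
        return (country_lookup.value.get(code, "Unknown"), float(amount))

    events = sc.parallelize(["US,10.5", "IN,3.2", "bad-line"])
    parsed = events.map(parse).filter(lambda x: x is not None).cache()   # cached for reuse

    print(parsed.reduceByKey(lambda a, b: a + b).collect(), "malformed:", bad_records.value)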

Used Sqoop to export data back to relational databases for business reporting.

Created HBase tables to store variable data formats of input data coming from different portfolios.

Configured Spark using Scala and utilized the DataFrame and Spark SQL APIs for faster data processing.

Provided analysis reports to measure performance of data processing jobs after executing them on Microsoft Azure HDInsight.

Ingested data in mini-batches and performed RDD transformations on those mini-batches using Spark Streaming for streaming analytics in Databricks.

Designed and implemented static and dynamic partitioning and bucketing in Hive.
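
An illustrative Spark SQL sketch of static/dynamic partitioning and a bucketed table in Hive; the table and column names are placeholders, and Hive support is assumed on the session.

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-partitioning")
             .enableHiveSupport()
             .getOrCreate())

    # Toy staging data so the dynamic-partition insert below has something to read.
    spark.createDataFrame(
        [("o1", 10.0, "2024-05-01"), ("o2", 5.5, "2024-05-02")],
        ["order_id", "amount", "order_date"],
    ).write.mode("overwrite").saveAsTable("sales_staging")

    spark.sql("SET hive.exec.dynamic.partition=true")
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

    spark.sql("""
        CREATE TABLE IF NOT EXISTS sales_part (order_id STRING, amount DOUBLE)
        PARTITIONED BY (order_date STRING)
        STORED AS ORC
    """)

    # Dynamic-partition insert: partitions are derived from the order_date column.
    spark.sql("""
        INSERT OVERWRITE TABLE sales_part PARTITION (order_date)
        SELECT order_id, amount, order_date FROM sales_staging
    """)

    # Bucketed table declaration (DDL only), as used for join and sampling optimizations.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS sales_bucketed (order_id STRING, amount DOUBLE)
        CLUSTERED BY (order_id) INTO 16 BUCKETS
        STORED AS ORC
    """)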

Utilized partitioning, vectorization, and bucketing strategies to improve the processing times of long-running MapReduce jobs.

Created shell scripts for scheduling system maintenance jobs on the cluster.

Responsible for gathering the business requirements to understand incoming data and load the enterprise data to HDFS.

Created an enterprise data warehouse project (OVT) to provide standardized data definitions and values and to report customer and transaction data as building blocks of the Confidential business.

Designed and developed ETL processes using Informatica 10.4 to load data from a wide range of sources such as Oracle, flat files, Salesforce, and the AWS cloud.

Based on the business logic, developed various mappings and mapplets to load data from multiple sources using transformations such as Source Qualifier, Filter, Expression, Lookup, Router, Update Strategy, Sorter, Normalizer, Aggregator, Joiner, HTTP, and XML transformations.

Developed and supported Cox integration projects (OVC/OVT/SFDC, ISSE), making sure data flows across multiple systems as per business needs.

Created data pipelines using Python, PySpark and EMR services on AWS.

Created Glue jobs to pull dimensional table and view data from the OVT Oracle database. Worked closely with the ET-AWS Analytics team on the ISSE project, uploading transaction and revenue data into the Salesforce cloud and AWS S3 buckets.

Extracted and uploaded data into AWS S3 buckets using the Informatica AWS plugin.

Created Salesforce runtime reports as per business requirements.

Implemented Disaster Recovery and Failover servers in Cloud by replicating data across regions.

Developed Databricks ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.

Used Python and shell scripts to automate ELT and admin activities.

Worked on Proof of Concept for the RPM architecture redesign using C# .Net and SSMS.

Refactored stored procedures to build .NET CLR functions and registered them in SQL Server.

Extracted data from RDBMS and ingested to Hive using Sqoop and applied PySpark transformations.

Hands-on experience with major components of the Hadoop ecosystem, including HDFS, the MapReduce framework, YARN, Hive, and Sqoop.

CISCO (Minneapolis, Minnesota) Jan 2010 – Jan 2015

Data Engineer

Responsibilities:

Involved in implementing designs across the key phases of the software development life cycle (SDLC), including development, testing, implementation, and maintenance support.

Strong experience creating real-time data streaming solutions using Spark Core, Spark SQL and DataFrames, Spark Streaming, and Kafka.

Excellent understanding of Hadoop (Gen-1 and Gen-2) and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and ResourceManager (YARN).

Proficient at using Spark APIs to cleanse, explore, aggregate, transform, and store sales, customer, and stock data.

Hands-on experience with message brokers such as Apache Kafka.

Hands-on experience with systems-building languages such as Scala, Java.

Experience in writing UNIX shell scripts.

Involved in Requirement Analysis, Design, Development and Testing of the risk workflow system.

Developed stored procedures and triggers in PL/SQL and wrote SQL scripts to create and maintain the database, roles, users, tables, views, procedures, and triggers.

Used SQL queries to perform data validation and verify data integrity on Oracle 11g database.

Extensively used Core Java such as Multithreading, Exceptions, and Collections.

Generated server-side SQL scripts for data manipulation and validation and materialized views.

Created database access layer using JDBC and SQL stored procedures.

Worked on Java based connectivity of client requirement on JDBC connection.

Managed backup policies, handled CRs for backups and restorations, and performed user management and group policy management.

Involved in analyzing system failures, identifying root causes, and recommending courses of action.

Worked on root cause analysis for issues occurring in production batch processes and provided permanent fixes.

Licenses & certifications

Amazon Web Services DevOps Engineer - Professional

Amazon Web Services Solutions Architect - Professional

Google Cloud Certified Professional Data Engineer

Salesforce Certified

Education

Bachelor of Science (Computer Information Systems)

Strayer University, VA, USA (2009)


