
Senior Big Data Engineer

Location: Harrison, NY 10528

Posted: January 29, 2024


Resume:

DANIEL ANSONG

ad0f9l@r.postjobfree.com 516-***-****

Big Data Engineer | ETL Developer | Data Engineer | Hadoop Developer

PROFILE SUMMARY

•A results-oriented professional with over 15 years of IT experience, including 10+ years developing Big Data solutions as a Big Data Engineer, leveraging cloud platforms to design and implement scalable, high-performance data processing pipelines.

•Leveraged Amazon EMR to run Apache Spark, Hive, and other Big Data workloads, optimizing cluster sizing for cost-efficiency while ensuring top-notch performance.

•Extensively employed Amazon S3 for efficient data storage, implementing lifecycle policies to transition data to more cost-effective storage classes like S3 Glacier (see the lifecycle sketch at the end of this summary).

•Effectively harnessed AWS Lambda (Python scripts) and Glue/Crawlers for serverless data processing, fine-tuning function execution times, memory allocation, and concurrency settings.

•Orchestrated intricate data workflows efficiently using AWS Step Functions, enhancing reliability and automation.

•Proficiently implemented real-time data streaming solutions with Amazon Kinesis, enabling timely data collection and analysis.

•Demonstrated expertise in managing Amazon DynamoDB, a highly scalable NoSQL database service.

•Established real-time monitoring and troubleshooting capabilities through AWS CloudWatch Alarms and Dashboards, ensuring optimal system performance.

•Implemented robust cloud security practices, covering encryption, access controls, and identity management, to safeguard sensitive data.

•Leveraged infrastructure-as-code (IaC) tools such as AWS CloudFormation for automated cloud resource provisioning.

•Proficient in optimizing data warehousing solutions with AWS Redshift, harnessing its capabilities for high-performance analytics.

•Demonstrated extensive experience in columnar storage and parallel processing for data efficiency and rapid query performance.

•Demonstrated expertise in cloud-based data warehousing using Snowflake, connecting from Python and Java via JDBC and capitalizing on its multi-cluster, shared data architecture.

•Proficient in DBT (Data Build Tool) for transforming and modeling data, optimizing analytics pipelines.

•Leveraged Azure HDInsight for scalable Big Data processing, fine-tuning cluster configurations for optimal performance and cost efficiency.

•Effectively utilized Azure Blob Storage for data storage and lifecycle management, implementing data tiering strategies to achieve cost savings.

•Designed and deployed serverless data processing pipelines using Azure Functions, optimizing function performance and scalability.

•Orchestrated complex data workflows with Azure Data Factory, ensuring efficient data movement and transformation.

•Proficiently monitored and analyzed system performance using Azure Monitor and Azure Log Analytics, enabling proactive issue resolution.

•Implemented reliable and scalable event streaming solutions using Azure Event Hubs.

•Architected and implemented hybrid cloud solutions, seamlessly integrating on-premises and Azure-based data environments.

•Demonstrated a proven track record in data warehousing with Azure Synapse Analytics, seamlessly unifying big data and data warehousing.

•Utilized Google Cloud Dataproc for scalable and cost-effective Big Data processing, fine-tuning cluster configurations to optimize performance and cost.

•Efficiently managed data storage and retrieval using Google Cloud Storage, implementing lifecycle policies to ensure cost-effective data management.

•Developed serverless data processing pipelines with Google Cloud Functions, optimizing function execution times and resource allocation.

•Orchestrated data workflows efficiently through Google Cloud Composer, ensuring automation and reliability.

•Implemented robust security practices and compliance frameworks within GCP to protect sensitive data and meet industry standards.

•Proficiently harnessed GCP's monitoring and logging services, including Google Cloud Monitoring and Google Cloud Logging, for real-time system performance analysis and issue resolution.

•Proficient in implementing Pub/Sub messaging patterns for event-driven architectures and distributed systems, enabling scalable and asynchronous data processing.

•Demonstrated proficiency in containerization technologies like Docker and Kubernetes for deploying and managing Big Data applications in GCP.

•Designed and implemented hybrid cloud solutions, seamlessly integrating on-premises and Google Cloud-based data environments.

•Executed data migration strategies specific to GCP, including lift and shift, re-platforming, and re-factoring for seamless data transition.

•Applied GCP cost optimization techniques, including committed use contracts and preemptible instances, to maximize cost efficiency.

•Designed and implemented data ingestion pipelines using Apache NiFi to efficiently collect, route, and process data from various sources.

•Managed and administered on-premises Hadoop clusters, ensuring their availability, reliability, and optimal performance.

•Installed, configured, and maintained Hadoop ecosystem components, including HDFS, YARN, MapReduce, Hive, HBase, and others, for efficient data processing.

•Conducted capacity planning and cluster scaling to accommodate growing data volumes and workloads on on-premises Hadoop infrastructure.

•Proficient in containerization technologies like Docker and Kubernetes for deploying and managing Big Data applications in various cloud environments.

•Proven experience in cloud data migration strategies, encompassing lift and shift, re-platforming, and re-factoring, to ensure smooth data transition to the cloud.

•Possess strong knowledge of cloud cost optimization techniques, including resource scaling, reserved instances, and spot instances, to maximize cost efficiency.

•Contributed to Agile/Scrum processes, actively participating in Sprint Planning, backlog management, Sprint Retrospectives, and requirements gathering.
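
The S3 lifecycle management mentioned above can be illustrated with a minimal boto3 sketch. This is an illustrative example only: the bucket name, prefix, and retention thresholds are assumptions, not details from any specific engagement.

# Hypothetical sketch: apply an S3 lifecycle rule with boto3 to transition objects
# to Glacier after 90 days and expire them after a year (all values illustrative).
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-raw-data",
                "Filter": {"Prefix": "raw/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)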

TECHNICAL SKILLS

Hadoop Ecosystem: Hortonworks, Cloudera, Azure HDInsight, AWS EMR

Storage: HDFS, AWS S3, Azure Data Lake, Google Cloud Storage

Languages: Scala, SQL, Bash scripting, Python, Java

Databases: PostgreSQL, Microsoft SQL Server, MySQL, Cosmos DB, DynamoDB, MongoDB, AWS RDS, Redshift, Snowflake, Google BigQuery, DBT (Data Build Tool)

ETL Tools: Spark, Spark Structured Streaming, Sqoop, Azure Data Factory, Azure Databricks, SSIS, AWS EMR, AWS Glue, Google Cloud Dataflow, Google Cloud Dataproc, AWS Lambda

Visualization: Tableau, Power BI, Amazon QuickSight

Containerization: Kubernetes, Docker

Orchestration: Apache Airflow, Oozie

Streaming: Kafka, Azure Event Hubs, Amazon Kinesis, Google Cloud Pub/Sub

CI/CD: Jenkins, Azure DevOps, AWS CodePipeline, Terraform, CloudFormation, Git, Version Control

Monitoring: Grafana, CloudWatch, Spark UI, YARN logs

File Formats: JSON, CSV, Parquet, ORC, Avro

WORK EXPERIENCE

Sr. Big Data Engineer

Mastercard, Harrison, New York, Nov’21-Present

•Collaborated on an AWS data engineering project that uses Glue, Lambda, Step Functions, Python, and Java to build an end-to-end pipeline for migrating data to the cloud.

•Use AWS Glue Crawlers to automatically discover and catalog metadata from source data stored in S3, RDS, or other data stores.

•Schedule Glue Crawlers to periodically update the data catalog.

•Implemented fully managed Kafka (Amazon MSK) streaming solutions for real-time data transfer to Spark clusters in Databricks on AWS.

•Successfully migrated data from local SQL Servers to Amazon RDS and EMR Hive, optimizing data management and enabling seamless data migration to the cloud.

•Utilized AWS Redshift and Redshift Spectrum for secure cloud-based data storage, ensuring scalability, accessibility, and facilitating data migration.

•Managed AWS resources, including EC2 instances and Hadoop Clusters, for optimal performance during the data migration process.

•Utilize PySpark for efficient data ingestion from various sources, including structured and unstructured financial data.

•Engineered and maintained a Cloudera Hadoop distribution cluster on AWS EC2, enhancing data processing capabilities and supporting data migration initiatives.

•Implement AWS Lambda functions in both Python and Java to perform specific tasks within the data pipeline, such as triggering Glue jobs, monitoring pipeline health, and sending notifications (see the Lambda sketch at the end of this section).

•Proficiently used Spark SQL and the DataFrames API for efficient data loading into Spark Clusters, including data migration projects.

•Develop ETL (Extract, Transform, Load) jobs in AWS Glue, written in Python, Java, and Scala.

•Utilize Glue's built-in transformations and custom scripts to clean, filter, and transform the data as needed.

•Define workflows that chain together Glue jobs, Lambda functions, and other AWS services.

•Demonstrated expertise in data manipulation and analysis using Python, Java, SQL, and Snowflake, essential skills for data migration and analysis.

•Leveraged Snowflake, Snowpipe, and Redshift Spectrum for efficient data processing and analysis, including data migration tasks.

•Leverage PySpark libraries to build scalable and high-performance data processing applications.

•Designed and implemented AWS Lambda functions for serverless data processing, optimizing function execution times, memory allocation, and concurrency settings, key components of data migration workflows.

•Orchestrated complex data workflows efficiently using AWS Step Functions, enhancing reliability and automation in data migration processes.

•Utilized AWS Glue for data extraction, transformation, and loading (ETL) processes, crucial for data migration projects.

•Managed end-to-end data collection, processing, and analysis with Kinesis services, supporting data migration activities.

•Implemented real-time data streaming solutions with Amazon Kinesis, enabling timely data collection, analysis, and migration.

•Proficiently handled Amazon DynamoDB, a highly scalable NoSQL database service, for data migration and storage needs.

•Demonstrated expertise in database modeling and design for DynamoDB, critical for data migration projects.

•Effectively utilized AWS CodePipeline for continuous integration and continuous deployment (CI/CD) in data migration workflows.

•Designed and optimized data warehousing solutions with AWS Redshift, harnessing its power for high-performance analytics and data migration tasks.

•Integrated AWS Redshift with various AWS data services, streamlining data workflows, including data migration pipelines.

•Implemented and optimized data transformations, aggregations, and analytics using functional programming principles in Scala.

•Demonstrated expertise in cloud-based data warehousing using Snowflake, utilizing its multi-cluster, shared data architecture for efficient data migration.

•Effectively separated storage and compute for scalability in Snowflake, supporting data migration and analysis.

•Leveraged AWS CloudWatch for real-time monitoring and troubleshooting capabilities during data migration processes.

•Utilized AWS CloudFormation for automated cloud resource provisioning, ensuring data migration environments are set up efficiently.

•Proficient in implementing data transformations with DBT (Data Build Tool), crucial for data migration and transformation projects.
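
The Lambda-triggered Glue pattern described above, shown as a minimal Python sketch. The Glue job name, SNS topic ARN, and S3 event trigger are hypothetical placeholders rather than actual project resources.

# Hypothetical sketch of the kind of Lambda handler described above: triggered by an
# S3 object upload, it starts a Glue ETL job and publishes a failure notification.
import json
import boto3

glue = boto3.client("glue")
sns = boto3.client("sns")

GLUE_JOB_NAME = "example-curate-transactions"   # hypothetical Glue job
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:example-pipeline-alerts"  # hypothetical topic

def lambda_handler(event, context):
    try:
        record = event["Records"][0]["s3"]       # S3 put-event payload
        run = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={
                "--source_bucket": record["bucket"]["name"],
                "--source_key": record["object"]["key"],
            },
        )
        return {"statusCode": 200, "body": json.dumps({"JobRunId": run["JobRunId"]})}
    except Exception as exc:
        # Surface the failure to the on-call channel, then re-raise for retry handling.
        sns.publish(TopicArn=ALERT_TOPIC_ARN,
                    Subject="Glue trigger failed",
                    Message=str(exc))
        raise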

Senior Data Engineer

Altria Group, Henrico County, Virginia, Feb’20-Nov’21

•Spearheaded the containerization of Confluent Kafka applications, optimizing communication between containers through subnet configuration

•Utilized Azure Blob Storage as a robust solution for efficient data collection and storage, ensuring seamless access to and processing of extensive datasets

•Orchestrated intricate data pipelines using Azure Logic Apps and Azure Functions, harnessing Azure Event Hubs for event-driven messaging

•Demonstrated proficiency in data cleaning and preprocessing, leveraging the capabilities of Azure Data Factory and developing transformation scripts in Python and Java to ensure data quality.

•Conducted real-time data analysis using Azure Stream Analytics and Azure HDInsight for Apache Kafka, enhancing data-driven insights.

•Collaborated closely with data scientists and analysts to harness the power of Azure Machine Learning, developing machine learning models for critical tasks such as fraud detection, risk assessment, and customer segmentation.

•Implemented effective data transformations with Azure SQL Data Warehouse for SQL processing and Azure Databricks for Python and Scala/Java processing, encompassing critical tasks such as data cleaning, normalization, and standardization.

•Seamlessly integrated Azure Virtual Machines, Azure Monitor, and Azure Resource Manager into various Azure projects, optimizing infrastructure and resource management.

•Harnessed the power of Azure Data Lake Storage and Azure Cosmos DB to load data into Spark data frames, enabling efficient in-memory data computation to generate rapid output responses.

•Ensured efficient monitoring of Azure SQL Database and Azure Virtual Machines through Azure Monitor, maintaining system health and performance

•Fine-tuned PySpark/Python jobs for optimal performance, considering factors like data partitioning, caching, and cluster utilization

•Developed and deployed automated Python scripts to convert data from diverse sources, creating robust ETL pipelines for streamlined data processing

•Established a resilient and scalable data lake infrastructure using Azure Data Lake Storage, catering to the storage and processing needs of extensive datasets

•Transformed SQL queries into Spark transformations using Spark RDDs, Python, and Scala, enhancing data processing efficiency

•Maintained comprehensive documentation of Scala code, data pipelines, and system architecture

•Designed and optimized data processing workflows, harnessing the capabilities of Azure services such as Azure HDInsight and Azure Event Hubs, enabling the efficient processing and analysis of vast datasets

•Orchestrated the creation and monitoring of scalable and high-performance computing clusters, effectively utilizing Azure Functions, Azure Databricks, and Azure Monitor

•Created automated ETL (Extract, Transform, Load) processes using PySpark and Python to ensure timely data updates (see the PySpark sketch at the end of this section)

•Utilized Azure Synapse Analytics for accelerated data analysis, surpassing traditional Spark-based methods in terms of speed and efficiency

•Crafted streaming applications using Apache Spark Streaming and Azure Event Hubs, enhancing real-time data processing capabilities

•Leveraged Azure HDInsight to process Big Data across a Hadoop Cluster of virtual servers, while also utilizing Azure Synapse Analytics for data warehousing.

•Collaborated seamlessly with the DevOps team to deploy pipelines in higher environments, utilizing Azure DevOps for efficient deployment processes.

•Developed and implemented robust recovery plans and procedures, ensuring business continuity and data integrity.

•Efficiently executed Hadoop/Spark jobs on Azure HDInsight, with data and programs securely stored in Azure Data Lake Storage
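
A minimal PySpark sketch of the kind of ETL step described above: read raw data from Azure Data Lake Storage, standardize it, and write partitioned Parquet. The storage paths and column names are illustrative assumptions.

# Hypothetical PySpark ETL sketch; paths, container names, and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-adls-etl").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@exampleaccount.dfs.core.windows.net/events/"))   # hypothetical container

cleaned = (raw
           .dropDuplicates(["event_id"])                      # hypothetical key column
           .withColumn("event_date", F.to_date("event_ts"))   # normalize timestamp to a date
           .filter(F.col("amount").isNotNull()))

(cleaned
 .repartition("event_date")                                   # partition for downstream reads
 .write.mode("overwrite")
 .partitionBy("event_date")
 .parquet("abfss://curated@exampleaccount.dfs.core.windows.net/events/"))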

Sr. Big Data Engineer

BlackRock, New York City, NY, Sep’18-Feb’20

•Successfully ingested data through Apache Flume, utilizing Kafka as the data source (with custom producers and consumers written in Python and Java) and HDFS as the data repository, ensuring seamless and efficient data flow within the Hadoop ecosystem

•Successfully ingested data through Google Cloud Dataflow, utilizing Pub/Sub as the data source and Google Cloud Storage as the data repository, ensuring seamless and efficient data flow within the Google Cloud ecosystem

•Expertly collected and aggregated extensive volumes of log data employing Google Cloud Dataflow, strategically staging the data in Google Cloud Storage for subsequent analysis and insights (see the Dataflow sketch at the end of this section)

•Transformed and optimized ETL processes to harness the full potential of Google Cloud Storage, significantly enhancing data processing capabilities and capitalizing on the scalability inherent in the Google Cloud platform

•Meticulously managed storage capacity, fine-tuning performance parameters, and conducted thorough benchmarking of Google Cloud Dataflow pipelines, resulting in optimized data processing and heightened overall system efficiency

•Seamlessly facilitated data transfer between the Google Cloud ecosystem and structured data storage in Google Cloud SQL using Dataflow, ensuring harmonious data integration and synchronization

•Streamlined the loading of data from external sources into Google Cloud Storage, fostering efficient data management practices within the Google Cloud environment

•Set up and configured BigQuery, a robust data warehousing and SQL querying tool, and extended its capabilities by developing customized user-defined functions (UDFs) in Python tailored to specific business requirements.

•Diligently administered cluster coordination responsibilities through Google Cloud Composer (Apache Airflow), ensuring seamless coordination and synchronization among the nodes within the Google Cloud environment.

•Leveraged Dataflow to proficiently export data from external sources to Google Cloud Storage, guaranteeing smooth data transfer and seamless integration across diverse data sources

•Fine-tuned Scala and Spark jobs to achieve optimal performance and resource utilization

•Collaborated with the team to export analyzed data to Google Cloud SQL databases utilizing Dataflow, facilitating data reporting and analysis across multiple platforms.

•Designed and implemented workflows using Cloud Composer (Apache Airflow), automating the execution of data processing jobs and SQL queries, resulting in streamlined data processing tasks and heightened workflow efficiency.

•Maintained a proactive approach to staying updated on the latest Google Cloud technologies, industry trends, and cutting-edge applications, ensuring a continuous drive for innovation and excellence.

•Spearheaded a critical project that involved optimizing Google Cloud Dataflow pipeline performance, resulting in a significant reduction in data processing time. By fine-tuning pipeline resources, enhancing data storage strategies, and implementing more efficient data processing algorithms, achieved a 30% improvement in overall data processing efficiency, enabling faster access to crucial insights and analytics for the organization.

•Successfully planned and executed the migration of data from on-premises Hadoop Cloudera clusters to Google Cloud Storage and Google Cloud BigQuery, ensuring a seamless transition to the cloud environment.

•Designed and implemented custom data migration pipelines using Apache NiFi to efficiently transfer data from Hadoop Cloudera to Google Cloud Storage while maintaining data integrity and security.

•Utilized Google Cloud Dataprep to clean, transform, and prepare data for migration, ensuring data quality and consistency throughout the migration process.

•Conducted thorough testing and validation of data migration pipelines to verify data accuracy and completeness, minimizing potential data issues during migration.
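
A minimal Apache Beam sketch of the Dataflow log-aggregation pattern described above: read raw log lines from Cloud Storage, count events per service, and stage the results back to Cloud Storage. The bucket paths and record layout are illustrative assumptions.

# Hypothetical batch Beam pipeline; submit with --runner=DataflowRunner and project flags.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()   # runner, project, and region are supplied at submission time

with beam.Pipeline(options=options) as p:
    (p
     | "ReadLogs" >> beam.io.ReadFromText("gs://example-raw-logs/2019/*.json")   # hypothetical bucket
     | "Parse" >> beam.Map(json.loads)
     | "KeyByService" >> beam.Map(lambda rec: (rec.get("service", "unknown"), 1))
     | "CountPerService" >> beam.CombinePerKey(sum)
     | "Format" >> beam.MapTuple(lambda svc, n: f"{svc},{n}")
     | "WriteCounts" >> beam.io.WriteToText("gs://example-staging/log_counts/part"))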

ETL Developer

Next Level Insight LLC., Green Bay, Wisconsin, Jun’17 – Sep’18

•Proficiently extracted and profiled data from diverse sources, including Excel spreadsheets, flat files, XML files, relational databases, and data warehouses, utilizing the SQL Server Integration Services (SSIS) utility.

•Employed a range of data transformations within the Data Flow task to enhance the quality and usability of extracted data by configuring aggregations, modifying data types, sorting data, merging data from disparate sources, splitting data for various destinations, conducting lookups to fill missing data, and eliminating duplicate records.

•Transferred the transformed data to multiple destinations, including databases, data warehouses, flat files, and Excel spreadsheets, ensuring seamless mapping between incoming and destination fields; this work supported real estate appraisal activities and fueled data analytics aimed at providing valuable insights for digital marketing companies.

•Leveraged Precedence Constraints, Event Handlers, and Error Output Configuration to effectively manage and address package runtime errors, ensuring the reliability and robustness of data processing workflows.

•Implemented package management strategies by defining Log Files, setting Break Points, configuring Data Viewers, and utilizing Checkpoints, enabling efficient monitoring and control of SSIS packages.

•Implemented comprehensive logging and monitoring using AWS CloudWatch, CloudTrail, and Glue job logs.

•Used Glue or custom scripts to identify and handle data anomalies, and tuned Glue job configurations for efficiency.

•Designed an event-driven architecture in which Lambda functions were triggered by various AWS services and events, such as S3 object uploads and DynamoDB table updates.

•Created Lambda functions to enrich incoming data with additional information from external sources or APIs before processing it further.

•Implemented custom data transformation logic within Lambda functions for tasks that could not easily be achieved with Glue ETL jobs alone.

•Built robust error handling and retry mechanisms within Lambda functions to handle transient failures and ensure data processing reliability.

•Designed, developed, and maintained complex ETL processes using industry-standard tools and frameworks such as Apache NiFi to facilitate the extraction, transformation, and loading of large volumes of data from source systems to data warehouses or data lakes.

•Employed data profiling and quality assessment techniques to identify and rectify anomalies, inconsistencies, and data quality issues within source data, ensuring that only accurate and reliable data was processed and loaded.

•Created and optimized database objects, such as tables, views, and stored procedures, to support ETL processes and enable efficient data transformation and storage within the target environment.

•Implemented error handling and logging mechanisms within ETL workflows with Python and Java scripts to capture, report, and manage exceptions and anomalies, facilitating the identification and resolution of data integration issues.

•Worked with data modeling teams to align ETL processes with data warehouse schema designs, ensuring that data was transformed and loaded in accordance with established data structures and relationships.

•Utilized scheduling and automation tools, such as Apache Airflow or cron jobs, to orchestrate and automate ETL workflows, enabling regular, reliable, and timely data integration tasks (see the Airflow sketch at the end of this section).

•Conducted performance tuning and optimization of ETL processes, including SQL queries and transformations, to improve data processing efficiency, reduce processing times, and minimize resource consumption.

•Documented ETL workflows, data mappings, transformation rules, and operational procedures, ensuring the availability of comprehensive documentation for reference and auditing purposes.
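
A minimal Apache Airflow sketch of the scheduled ETL orchestration described above. The DAG name, schedule, and task callables are illustrative placeholders; each callable would contain the actual extract, transform, or load logic.

# Hypothetical nightly ETL DAG; all names and the schedule are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull incremental rows from the source system.
    pass

def transform(**context):
    # Placeholder: clean, deduplicate, and standardize the extracted data.
    pass

def load(**context):
    # Placeholder: load the transformed data into the warehouse schema.
    pass

with DAG(
    dag_id="example_nightly_etl",      # hypothetical DAG name
    start_date=datetime(2018, 1, 1),
    schedule_interval="0 2 * * *",     # run at 02:00 daily
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3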

Hadoop Administrator

Truist Bank, Charlotte, North Carolina, Aug’15 – Jun’17

•Managed highly diverse datasets, ranging from unstructured to structured data, within the Hadoop environment, ensuring efficient data processing and analysis capabilities.

•Significantly enhanced data processing speed and efficiency by seamlessly integrating and optimizing Hive, Sqoop, and Flume into existing ETL workflows, streamlining the extraction, transformation, and loading processes for extensive structured and unstructured data.

•Leveraged Hive to create a dynamic data warehousing solution, enabling in-depth client-based transit system analytics.

•Demonstrated expertise in working with various data formats, including JSON, XML, CSV, and ORC, and implemented advanced techniques such as Hive partitioning and bucketing for optimal data organization and retrieval.

•Managed the complete lifecycle of Hadoop clusters, including installation, node commissioning and decommissioning, high availability configuration, and capacity planning, ensuring the efficient operation of data nodes.

•Successfully executed cluster upgrades on the staging platform before implementing them on the production cluster, minimizing potential disruptions and ensuring system stability.

•Effectively utilized Cassandra for processing JSON-documented data and HBase for storing region-based data, demonstrating versatility in handling diverse data requirements.

•Oversaw Zookeeper configurations and ZNodes to ensure high availability within the Hadoop cluster, contributing to a reliable and fault-tolerant infrastructure.

•Implemented Apache Ranger to enforce access control lists (ACLs) and conduct audits on the Hadoop cluster, ensuring compliance with regulatory standards and data security protocols.

•Conducted comprehensive HDFS balancing and fine-tuning activities to optimize the performance of MapReduce applications, significantly improving data processing efficiency.

•Designed and executed a strategic data migration plan, enabling the seamless integration of additional data sources into the Hadoop ecosystem, resulting in centralized and unified data management.

•Streamlined Hadoop cluster setup and management processes through the proficient use of open-source configuration management and deployment tools such as Puppet, supplemented by Java and Python scripts (see the cluster-check sketch at the end of this section).

•Implemented Kerberized authentication mechanisms to enhance cluster security, ensuring secure user access and authentication within the Hadoop environment.

•Tailored YARN Capacity and Fair schedulers to align with organizational requirements, effectively managing resource allocation and prioritizing job execution.

•Provided valuable insights into cluster capacity and growth planning, contributing to informed decisions regarding node configuration and resource allocation to accommodate future data needs.

•Leveraged MapReduce counters to identify bottlenecks and expedite data processing, enhancing the overall performance of data-intensive operations within the Hadoop environment.

•Played a pivotal role in designing robust backup and disaster recovery methodologies for Hadoop clusters and associated databases, ensuring data resilience and uninterrupted business operations.

•Expertly executed upgrades, patches, and fixes on the Hadoop cluster, implementing either rolling or express methods to minimize downtime and maintain system stability
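
A minimal sketch of the kind of Python cluster-check helper referenced above: it runs hdfs dfsadmin -report, extracts per-DataNode usage, and flags nodes above a threshold. The threshold and parsing details are illustrative assumptions.

# Hypothetical helper: parse `hdfs dfsadmin -report` and flag heavily used DataNodes.
import re
import subprocess

USAGE_THRESHOLD_PCT = 85.0   # hypothetical alerting threshold

def datanode_usage():
    """Return a list of (hostname, dfs_used_percent) parsed from dfsadmin output."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True
    ).stdout
    nodes = []
    for block in report.split("\n\n"):                     # one block per DataNode
        host = re.search(r"Hostname:\s*(\S+)", block)
        used = re.search(r"DFS Used%:\s*([\d.]+)%", block)
        if host and used:
            nodes.append((host.group(1), float(used.group(1))))
    return nodes

if __name__ == "__main__":
    for host, pct in datanode_usage():
        flag = "  <-- over threshold" if pct > USAGE_THRESHOLD_PCT else ""
        print(f"{host}: {pct:.1f}% used{flag}")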

Data Engineer

Springboard, San Francisco, United States, Sep’13-Aug’15

•Designed and implemented ETL pipelines using a combination of Data Factory, Synapse, Databricks, CosmosDB, and SQL Server, facilitating seamless data integration and transformation.

•Migrated Hive UDFs and queries to Spark SQL/Python, leveraging the power of Spark for faster data processing and analysis.

•Conducted benchmarking between Hive and HBase to identify the best approach for fast data ingestion.

•Configured Spark Streaming to receive real-time data from Apache Kafka and store the streamed data to HDFS using Scala, enabling real-time data processing (see the Structured Streaming sketch at the end of this section).

•Wrote SQL scripts on the final database to prepare data for visualization with Tableau, enabling data-driven insights.

•Utilized Cloudera Manager for the installation and management of a multi-node Hadoop cluster, ensuring a scalable and reliable infrastructure.

•Developed and implemented custom Hive UDFs, focusing on date functions to efficiently query large datasets and optimize data retrieval.

•Utilized Shell scripts for orchestrating the execution of various scripts and managing data files within and outside of HDFS.

•Developed Python-based notebooks using Alteryx for automated weekly, monthly, and quarterly reporting ETL, streamlining data processing and analysis.

•Analyzed large datasets to determine optimal aggregation and reporting methods

•Installed and configured essential Hadoop components such as Hive, Pig, Sqoop, and Oozie on the Hadoop cluster, enabling efficient data processing and analysis

•Monitored Apache Airflow cluster and utilized Sqoop for importing data from Oracle to Hadoop, ensuring data availability and integrity.

•Programmed Flume and HiveQL scripts to effectively extract, transform, and load data into the database, ensuring data integrity and consistency

•Created Airflow Scheduling scripts in Python to automate data pipelines and data transfers, streamlining data workflows and ensuring timely data processing and delivery
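
A minimal PySpark Structured Streaming sketch of the Kafka-to-HDFS pattern described above (the original work used Scala and Spark Streaming; this shows the same idea in Python). Broker, topic, and HDFS paths are illustrative, and the spark-sql-kafka connector package must be on the classpath.

# Hypothetical Kafka-to-HDFS streaming job; all endpoints and paths are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-kafka-to-hdfs").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
          .option("subscribe", "example-events")               # hypothetical topic
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp"))

query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/streaming/events")           # hypothetical HDFS path
         .option("checkpointLocation", "hdfs:///checkpoints/example-events")
         .trigger(processingTime="1 minute")                        # micro-batch every minute
         .start())

query.awaitTermination()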

PREVIOUS EXPERIENCE

Linux Administrator

Avnet Technologies, Phoenix, Arizona, Nov’09-Sep’13

•Installed, configured, and maintained Linux operating systems on servers and workstations.

•Set up and managed user accounts, groups, and permissions.

•Ensured that systems were configured according to best practices and security guidelines.

•Monitored system performance, resource utilization, and capacity planning.

•Identified and resolved performance bottlenecks, system crashes, and other issues.

•Optimized system performance through kernel tuning and other performance-enhancing techniques.

•Implemented security measures to protect Linux systems from threats and vulnerabilities.

•Applied security patches and updates in a timely manner.

•Conducted regular security audits and compliance checks to ensure adherence to industry standards and Avnet's security policies.

•Set up and maintained backup and recovery procedures to safeguard critical data.

•Performed data backups and created recovery strategies to minimize downtime in case of system failures.

•Provided technical support to end users and troubleshot Linux-related issues.

•Developed and maintained shell scripts and automation tools for system administration tasks (see the sketch at the end of this section).

•Automated repetitive processes to improve operational efficiency and reduce manual effort.
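
A minimal sketch of the kind of administration automation described above, written in Python for consistency with the other examples (the original tooling was shell-based). Mount points and the warning threshold are illustrative assumptions.

# Hypothetical helper: check filesystem usage and warn on mount points over a threshold.
import shutil

MOUNTS = ["/", "/var", "/home"]     # hypothetical mount points to watch
THRESHOLD_PCT = 90.0                # hypothetical warning threshold

def check_mounts():
    for mount in MOUNTS:
        usage = shutil.disk_usage(mount)
        used_pct = 100.0 * usage.used / usage.total
        status = "WARNING" if used_pct >= THRESHOLD_PCT else "ok"
        print(f"{mount}: {used_pct:.1f}% used [{status}]")

if __name__ == "__main__":
    check_mounts()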


