Fereydoon Hamidi
********@*****.***
Summary:
Over 15 years of experience in database administration and development for very large and complex SQL Server databases (versions 2000 – 2019).
Programming languages (VB.NET, C# .NET, FoxPro, PHP, Python, Java, ASP.NET, HTML)
NoSQL databases (MongoDB, HBase)
Hadoop Ecosystem
Expertise in Database Administration on Production Servers with server configuration, performance tuning and maintenance with outstanding troubleshooting capabilities.
SQL Server HA (FCI, AG), log shipping, replication, mirroring
Windows Server Failover Clustering
Oracle VirtualBox
Strong technical background along with the ability to perform business analysis and write effective documentation and specifications.
Expertise in T-SQL: writing stored procedures, triggers, and functions.
Administering SQL Server High Availability and Disaster Recovery strategies.
Expert in data transformation tools such as SSIS, DTS, BCP, and BULK INSERT.
Experience in Disaster Recovery Planning and Security Management operations.
Proficient in SQL Server replication, backup/recovery, and disaster recovery planning.
Troubleshooting performance problems, fine-tuning databases, and index analysis.
Maintained individual service accounts for SQL Server services on clusters, with separate security groups to strengthen the security posture.
Experience in Batch processes, Import, Export, Backup, Database Monitoring tools and Application support.
Data Warehouse (SSIS, SSAS, SSRS).
Power BI Suite.
ODI (Oracle Data Integrator)
Informatica
Microsoft Azure
Epic Clarity
Epic Caboodle
EBS
QlikView, PowerBuilder
Oracle's E-Business Suite
SQL Server Migration Assistant (SSMA) for Oracle.
Technical Education
BS Computer Engineering, South Tehran Azad University, Iran, 1994
Microsoft Certified Professional (MCP)
MCSA (SQL Server 2012)
CompTIA A+
CCNA
Epic Clarity
Technical Expertise
Databases:
MS SQL Server 2000 – 2019, Oracle 8i, 9i (9.2.0.5, 9.2.0.6), 10g, 11g, MySQL, PostgreSQL
Programming:
Python, PHP, R, Visual Basic, VB.NET, C, C++, C# .NET, Java, JavaScript, Visual FoxPro, Visual Studio 2019
DBA:
Database administration of MS SQL Server 2008 and 2014
Operating System:
Windows Server 2008/2016, Windows NT, Windows 7/10, Linux
BI Tools
Power BI, Power Apps, Microsoft Teams Apps, SharePoint, Power Automate, Tableau
Reporting Tools
SSRS, Jaspersoft Studio, Crystal Reports
ETL Tools
MS SQL SSIS, Python
Professional Experience
Title : Information Technology Specialist III
Client : Virginia Employment Commission
Duration : August 2022 to July 2025
Responsibilities
Design, develop, and maintain SSIS packages for data extraction, transformation, and loading (ETL) from multiple sources into the SQL Server data warehouse.
Support existing ETL processes through break-fix troubleshooting, debugging, and performance optimization.
Investigate and resolve data quality issues, ensuring accuracy, consistency, and reliability across data pipelines.
Partner with internal stakeholders to understand business requirements and deliver appropriate data solutions.
Collaborate with the broader Data Engineering team to ensure alignment on design standards, best practices, and version control using Azure DevOps/GitHub.
Contribute to the integration of new data sources, including clinical data, payer/claims data, HR/employee health data, and survey data.
Assist with data snapshot processes (e.g., daily Clarity data extracts) and prepare data for downstream reporting and analytics in Power BI.
Provide input into emerging technologies and platforms, such as AWS and Microsoft Fabric, for future data strategy initiatives.
Developing and supporting SSIS packages in a production environment.
Proficiency with SQL Server (queries, stored procedures, performance tuning).
Experience with data warehousing concepts, ETL workflows, and relational database design.
Familiarity with data quality investigation and remediation processes.
Strong troubleshooting and problem-solving skills.
Experience with Azure DevOps/GitHub for source control and release management.
Preferred:
Exposure to cloud platforms (AWS, Microsoft Fabric).
Knowledge of Power BI reporting and integration with data warehouses.
Experience with healthcare data (Epic Clarity, payer/claims, HR/employee health data).
Designing, developing, and supporting enterprise data warehouse and business intelligence solutions.
Working with ETL/ELT tools such as SQL Server Integration Services (SSIS) and Azure Data Factory.
Working with cloud-based data platforms such as Azure Synapse Analytics or Microsoft Fabric.
Creating complex SQL queries, stored procedures, and views, and optimizing performance for large datasets.
Building and maintaining star schema data models, fact and dimension tables, and metadata documentation.
Working with data visualization and reporting tools, especially Power BI Service.
Understanding of data governance, quality, and lineage principles.
Integrating data from diverse structured and semi-structured sources including relational databases, flat files, APIs, and cloud storage.
Working with source control and DevOps practices, including CI/CD pipelines for data integration workflows.
Strong analytical and troubleshooting skills for diagnosing data anomalies, ETL failures, and performance bottlenecks.
Ability to collaborate with business stakeholders to translate requirements into technical specifications and data pipelines.
Strong communication skills, both written and verbal, for documenting data models, workflows, and data contracts.
Working in Agile/Scrum or DevOps environments.
Microsoft SQL Server certification (MCSA).
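The data-quality bullets above (investigating and remediating bad rows in pipeline extracts) can be sketched in Python. This is a minimal illustration only; the column names (patient_id, visit_date) are invented for the example and not taken from any actual schema:

```python
# Minimal sketch of a row-level data-quality check on a pipeline extract.
# The column names (patient_id, visit_date) are hypothetical examples.
from datetime import date

def find_quality_issues(rows):
    """Return (row_index, problem) pairs for rows that fail basic checks."""
    issues = []
    seen_ids = set()
    for i, row in enumerate(rows):
        pid = row.get("patient_id")
        if pid is None:
            issues.append((i, "missing patient_id"))
        elif pid in seen_ids:
            issues.append((i, "duplicate patient_id"))
        else:
            seen_ids.add(pid)
        if not isinstance(row.get("visit_date"), date):
            issues.append((i, "invalid visit_date"))
    return issues

rows = [
    {"patient_id": 1, "visit_date": date(2024, 1, 5)},
    {"patient_id": 1, "visit_date": date(2024, 1, 6)},   # duplicate id
    {"patient_id": None, "visit_date": "2024-01-07"},    # missing id, date is a string
]
print(find_quality_issues(rows))
```

In a real pipeline the rows would come from the warehouse rather than an in-memory list, and the flagged rows would feed a remediation report.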
Title : Sr Data Migration Engineer
Client : University Of Pittsburgh Medical Center (Hillman Cancer Center)
Duration : August 2021 to Feb 2022
Responsibilities
Design, develop, and maintain SSIS packages for data extraction, transformation, and loading (ETL) from multiple sources into the SQL Server data warehouse.
Support existing ETL processes through break-fix troubleshooting, debugging, and performance optimization.
Investigate and resolve data quality issues, ensuring accuracy, consistency, and reliability across data pipelines.
Partner with internal stakeholders to understand business requirements and deliver appropriate data solutions.
Collaborate with the broader Data Engineering team to ensure alignment on design standards, best practices, and version control using Azure DevOps/GitHub.
Contribute to the integration of new data sources, including clinical data, payer/claims data, HR/employee health data, and survey data.
Assist with data snapshot processes (e.g., daily Clarity data extracts) and prepare data for downstream reporting and analytics in Power BI.
Provide input into emerging technologies and platforms, such as AWS and Microsoft Fabric, for future data strategy initiatives.
Developing and supporting SSIS packages in a production environment.
Proficiency with SQL Server (queries, stored procedures, performance tuning).
Experience with data warehousing concepts, ETL workflows, and relational database design.
Familiarity with data quality investigation and remediation processes.
Strong troubleshooting and problem-solving skills.
Experience with Azure DevOps/GitHub for source control and release management.
Preferred:
Exposure to cloud platforms (AWS, Microsoft Fabric).
Knowledge of Power BI reporting and integration with data warehouses.
Experience with healthcare data (Epic Clarity, payer/claims, HR/employee health data).
Creating and managing a requirements traceability matrix.
Data analysis and data profiling to discover patterns, meaningful relationships, anomalies and trends.
Documenting test cases, test steps and results.
Testing and validating data for analytical systems, reports, integrations, and user experience.
Working on teams using a variety of Systems Development Life Cycle (SDLC) processes including Waterfall, Kanban, Agile, and Iterative.
Designing tests with automated tools.
Collaborate with product, engineering, and business stakeholders to understand data needs and deliver solutions.
Define and execute the data engineering roadmap in alignment with organizational strategy.
Oversee the design, development, and maintenance of scalable ETL/ELT pipelines and data workflows.
Partner with data scientists and business analysts to enable advanced analytics and AI/ML initiatives.
Establish best practices for data architecture, quality, security, and governance.
Optimize data processing for performance, reliability, and cost-efficiency.
Ensure compliance with data privacy and security standards.
Designing, developing, and supporting enterprise data warehouse and business intelligence solutions.
Working with ETL/ELT tools such as SQL Server Integration Services (SSIS) and Azure Data Factory.
Working with cloud-based data platforms such as Azure Synapse Analytics or Microsoft Fabric.
Creating complex SQL queries, stored procedures, and views, and optimizing performance for large datasets.
Building and maintaining star schema data models, fact and dimension tables, and metadata documentation.
Working with data visualization and reporting tools, especially Power BI Service.
Understanding of data governance, quality, and lineage principles.
Integrating data from diverse structured and semi-structured sources including relational databases, flat files, APIs, and cloud storage.
Working with source control and DevOps practices, including CI/CD pipelines for data integration workflows.
Strong analytical and troubleshooting skills for diagnosing data anomalies, ETL failures, and performance bottlenecks.
Ability to collaborate with business stakeholders to translate requirements into technical specifications and data pipelines.
Strong communication skills, both written and verbal, for documenting data models, workflows, and data contracts.
Working in Agile/Scrum or DevOps environments.
Microsoft SQL Server certification (MCSA).
Title : Application Database / Datawarehouse Developer
Client : University of Florida (Urban Regional Planning)
Duration : September 2020 to September 2021
Responsibilities
Develop, maintain, and optimize data warehouse solutions to support business intelligence initiatives.
Dashboard development with Power BI and Power Apps.
Design and implement ETL processes using tools such as Informatica, SSIS, ADF to ensure data integrity and availability.
Write complex T-SQL queries for data extraction, transformation, and loading processes.
Utilize Python for scripting and automation of data processing tasks.
Create and manage dashboards and reports that provide insights into business performance metrics.
Collaborate with stakeholders to gather requirements and translate them into technical specifications.
Implement big data solutions leveraging Azure Data Lake for scalable data storage and processing.
Perform shell scripting for automation of routine tasks and processes.
Ensure best practices in data governance, security, and compliance are followed throughout the development lifecycle.
Develop, maintain, and optimize data warehouse solutions to support business intelligence initiatives.
Strong proficiency in T-SQL for database querying and manipulation.
Familiarity with big data technologies and cloud platforms, particularly Azure Data Lake.
Knowledge of shell scripting and similar scripting languages.
Proficiency in programming with Python for data analysis and automation tasks.
Experience with Azure Databricks.
Design, develop, and maintain PowerApps applications to automate workflows and enhance operational processes.
Build, maintain, and optimize dashboards, reports, and data models using Power BI, Tableau, Qlik, or other visualization tools.
Collaborate with end users and technical resources to understand and integrate data outputs into visualizations effectively.
Experience working with SQL and relational database management systems.
Experience with cloud technologies (Azure, AWS).
Experience with Databricks.
Programming and modifying code in SQL, Python, and PySpark to support cloud-based and on-premises data warehousing services.
Hands-on experience with dimensional data modeling, schema design, and data warehousing
Hands-on experience with troubleshooting data issues
Hands-on experience with various performance improvement techniques
Willingness to identify and implement process improvements, and best practices as well as ability to take ownership to work within a fast-paced, collaborative, and team-based support environment
Excellent oral and written communication skills.
Creating and managing a requirements traceability matrix.
Data analysis and data profiling to discover patterns, meaningful relationships, anomalies and trends.
Documenting test cases, test steps and results.
Testing and validating data for analytical systems, reports, integrations, and user experience.
Working on teams using a variety of Systems Development Life Cycle (SDLC) processes including Waterfall, Kanban, Agile, and Iterative.
Designing tests with automated tools.
Designing, developing, and supporting enterprise data warehouse and business intelligence solutions.
Working with ETL/ELT tools such as SQL Server Integration Services (SSIS) and Azure Data Factory.
Working with cloud-based data platforms such as Azure Synapse Analytics or Microsoft Fabric.
Creating complex SQL queries, stored procedures, and views, and optimizing performance for large datasets.
Building and maintaining star schema data models, fact and dimension tables, and metadata documentation.
Working with data visualization and reporting tools, especially Power BI Service.
Understanding of data governance, quality, and lineage principles.
Integrating data from diverse structured and semi-structured sources including relational databases, flat files, APIs, and cloud storage.
Working with source control and DevOps practices, including CI/CD pipelines for data integration workflows.
Strong analytical and troubleshooting skills for diagnosing data anomalies, ETL failures, and performance bottlenecks.
Ability to collaborate with business stakeholders to translate requirements into technical specifications and data pipelines.
Strong communication skills, both written and verbal, for documenting data models, workflows, and data contracts.
Working in Agile/Scrum or DevOps environments.
Microsoft SQL Server certification (MCSA).
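The Python scripting and flat-file integration work described in this role can be illustrated with a minimal sketch; the CSV layout and the state column are invented for the example, not taken from an actual data source:

```python
# Minimal sketch of a Python automation step for a flat-file extract:
# read CSV rows, normalize one field, and return the cleaned rows.
# The id/state columns are hypothetical examples.
import csv
import io

def normalize_extract(src_text):
    """Read CSV text, trim and upper-case the 'state' column, return dict rows."""
    reader = csv.DictReader(io.StringIO(src_text))
    out = []
    for row in reader:
        row["state"] = row["state"].strip().upper()
        out.append(row)
    return out

sample = "id,state\n1, va \n2,Md\n"
print(normalize_extract(sample))
```

A production version would read from a file or cloud storage and write the cleaned rows to a staging table instead of printing them.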
Title : Senior Database Engineer
Client : University of Maryland (School Of Medicine) Baltimore, MD
Duration : November 2015 to Feb 2020
Responsibilities
Design, develop, and maintain SSIS packages for data extraction, transformation, and loading (ETL) from multiple sources into the SQL Server data warehouse.
Support existing ETL processes through break-fix troubleshooting, debugging, and performance optimization.
Investigate and resolve data quality issues, ensuring accuracy, consistency, and reliability across data pipelines.
Partner with internal stakeholders to understand business requirements and deliver appropriate data solutions.
Collaborate with the broader Data Engineering team to ensure alignment on design standards, best practices, and version control using Azure DevOps/GitHub.
Contribute to the integration of new data sources, including clinical data, payer/claims data, HR/employee health data, and survey data.
Assist with data snapshot processes (e.g., daily Clarity data extracts) and prepare data for downstream reporting and analytics in Power BI.
Provide input into emerging technologies and platforms, such as AWS and Microsoft Fabric, for future data strategy initiatives.
Developing and supporting SSIS packages in a production environment.
Proficiency with SQL Server (queries, stored procedures, performance tuning).
Experience with data warehousing concepts, ETL workflows, and relational database design.
Familiarity with data quality investigation and remediation processes.
Strong troubleshooting and problem-solving skills.
Experience with Azure DevOps/GitHub for source control and release management.
Preferred:
Exposure to cloud platforms (AWS, Microsoft Fabric).
Knowledge of Power BI reporting and integration with data warehouses.
Experience with healthcare data (Epic Clarity, payer/claims, HR/employee health data).
Creating and managing a requirements traceability matrix.
Data analysis and data profiling to discover patterns, meaningful relationships, anomalies and trends.
Documenting test cases, test steps and results.
Testing and validating data for analytical systems, reports, integrations, and user experience.
Working on teams using a variety of Systems Development Life Cycle (SDLC) processes including Waterfall, Kanban, Agile, and Iterative.
Designing tests with automated tools.
Collaborate with product, engineering, and business stakeholders to understand data needs and deliver solutions.
Define and execute the data engineering roadmap in alignment with organizational strategy.
Oversee the design, development, and maintenance of scalable ETL/ELT pipelines and data workflows.
Partner with data scientists and business analysts to enable advanced analytics and AI/ML initiatives.
Establish best practices for data architecture, quality, security, and governance.
Optimize data processing for performance, reliability, and cost-efficiency.
Ensure compliance with data privacy and security standards.
Title : Senior IT Specialist
Client : Commonwealth Catholic Charities (over 300 clients and statewide branches), Richmond, VA
Duration : November 2013 to May 2015
Responsibilities
Migrated SQL Server 2008 to SQL Server 2012 on Microsoft Windows Server 2012 Enterprise Edition.
Extensively worked on SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), and SSAS.
Created WinForms applications with VB.NET and C# .NET.
Rebuilt and monitored indexes at regular intervals for better performance.
Recovering the databases from backup in disasters. Implementing Point-In-Time Recovery solutions.
Involved in troubleshooting and fine-tuning databases for performance and concurrency.
Involved in source data analysis and designing mappings for data extraction; responsible for the design and development of SSIS packages to load data from various databases and files.
Experienced in using SQL Server Profiler, SQL Server Agent, and Database Engine Tuning Advisor (DTA).
Expert in implementing snapshot isolation and DDL triggers.
Implemented data partitioning, error handling through TRY...CATCH/THROW, and Common Table Expressions (CTEs).
Coded Oracle PL/SQL packages and procedures to perform data loading, error handling, and logging; tied the procedures into an existing ETL process.
Environment: Oracle PL/SQL, SQL Server 2008/2012 Enterprise, Windows Server 2008, Windows XP, Windows 7, .NET, Microsoft SharePoint, Crystal Reports.
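The CTE and error-handling bullet above describes T-SQL work; as a rough, runnable illustration of the same pattern, here is an equivalent Common Table Expression using Python's bundled SQLite. The table and column names are invented, and the try/except is only a loose analogue of T-SQL's TRY...CATCH:

```python
# Sketch of a Common Table Expression (CTE) with basic error handling,
# using SQLite for illustration; the orders table is a made-up example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 25.0), (3, 40.0)])

try:
    # CTE: name an intermediate result set, then aggregate over it.
    rows = conn.execute("""
        WITH big_orders AS (
            SELECT id, amount FROM orders WHERE amount > 20
        )
        SELECT COUNT(*), SUM(amount) FROM big_orders
    """).fetchall()
except sqlite3.Error as exc:   # loose analogue of T-SQL TRY...CATCH
    rows = []
    print("query failed:", exc)

print(rows)   # count and total of qualifying orders
```

The same WITH ... AS structure carries over to T-SQL; in SQL Server the error path would use BEGIN TRY/BEGIN CATCH and THROW instead of a Python exception handler.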