
Candidate Relevancy FAQs

Employers who post jobs on ADP’s recruiting platforms may refer to an applicant’s Candidate Relevancy or Profile Relevancy score. Candidate Relevancy and Profile Relevancy rely on artificial intelligence and machine learning to provide an initial comparison of an applicant’s education, experience, and skills against the education, experience, and skills requirements in the job description. This is intended to be one of many factors that a potential employer will review in making its interview decisions; there are no cut-off scores and all applications remain visible to employers. Candidates who opt out will have their score listed as “Not available.” These FAQs provide additional information about the data these tools collect, store, and retain, and the results of the most recent impartial evaluations of these tools.

1. What is Candidate Relevancy?

ADP’s Candidate Relevancy and Profile Relevancy tools (for ease of reference, both will jointly be referred to as “Candidate Relevancy” unless otherwise noted) use artificial intelligence and machine learning algorithms to conduct an initial review of an application, and are designed to be used by employers as one tool, among others, in the hiring process.1 Specifically, Candidate Relevancy conducts a mathematical assessment of how closely the skills, education, and/or experience on an applicant’s resume match the skills, education, and/or experience listed on the relevant job description. This process quantifies the “relevance” between the applicant’s resume and the job posting. The Candidate Relevancy model also leverages past decisions derived from millions of resumes and job descriptions where the selection decision is already known. The scores are intended to be used as one of many factors by an employer in determining whom to advance to the next round in the hiring process. Candidate Relevancy is not intended to replace human judgment during any step of the recruitment process and is designed in such a way that there are no cut-off scores that would eliminate applicants from being visible to employers in the user interface. Employers are provided access to all applications, enabling them to make human decisions on which candidates to pursue.

1 The Candidate Relevancy score is displayed to employers using ADP’s Recruitment Management product, while the Profile Relevancy score is displayed to employers using ADP’s WorkforceNow Recruitment platform.

2. How is the Candidate Relevancy score determined?

The Candidate Relevancy model first parses the information concerning the education, experience, and skills contained in the applicant’s resume or application and in the relevant job description. This information is formatted to allow a mathematical assessment of how closely the applicant’s education, skills, and experience match those found in the relevant job description. Candidate Relevancy does not extract or utilize the applicant’s name, address, race, ethnicity, gender, or protected demographic information. Each job requisition is classified using a job and sector taxonomy. The Candidate Relevancy model creates three sub-scores indicating how closely the applicant’s education, skills, and experience match those found in the job description. The three sub-scores are then weighted to create the Candidate Relevancy Score. The weights sum to 1 and reflect the relative importance of each component. Since job descriptions do not define the importance of each component, the importance (i.e., the weights) must be estimated empirically from the data. Separate weights are created for each sector in which the open job resides, and the weights are determined by a machine learning model.

The resulting weighted score (the final Candidate Relevancy score) is intended to be used by an employer as only one tool, among others, to aid in the selection of whom to interview or prioritize during the hiring pipeline.
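To make the weighting concrete, here is a minimal sketch in Python. The sector names, weight values, the 0-to-1 sub-score scale, and the mapping onto a 1-to-100 display range (noted in footnote 3 below) are illustrative assumptions, not ADP’s actual model; in ADP’s tool the per-sector weights are estimated by a machine learning model rather than fixed by hand.

# Illustrative sketch only. The Candidate Relevancy model combines three
# sub-scores (education, experience, skills) with per-sector weights that
# sum to 1. The sector names, weight values, and scaling below are
# hypothetical; ADP estimates its weights with a machine learning model.

# Hypothetical per-sector weights: (education, experience, skills), summing to 1.
SECTOR_WEIGHTS = {
    "healthcare": (0.30, 0.45, 0.25),
    "logistics": (0.15, 0.50, 0.35),
}

def relevancy_score(education: float, experience: float, skills: float, sector: str) -> float:
    """Weighted sum of sub-scores (assumed to be on a 0-1 scale), mapped onto 1-100."""
    w_edu, w_exp, w_skill = SECTOR_WEIGHTS[sector]
    weighted = w_edu * education + w_exp * experience + w_skill * skills
    return round(1 + 99 * weighted, 1)  # assumed mapping onto the 1-100 display range

# Example: strong experience match, moderate skills match, weaker education match.
print(relevancy_score(education=0.4, experience=0.9, skills=0.7, sector="logistics"))  # ~75.7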

3. What data does Candidate Relevancy collect and what are ADP’s retention policies regarding the information?

Type of Data        Collected From                                                 Retention Policy
Resume data         ADP WorkforceNow Recruitment or ADP Recruitment Management     Three years
Job descriptions    ADP WorkforceNow Recruitment or ADP Recruitment Management     Three years

4. Is Candidate Relevancy an automated employment decision tool covered by New York City Local Law 144 (“the NYC Ordinance”)?

The NYC Ordinance covers automated screening or selection tools that provide “output” (such as scores, classifications, or recommendations) to an employer and that are used to substantially assist or substitute for a human’s decision-making process. Under the NYC Ordinance, to substantially assist or substitute for a human’s decision-making process means: (1) to rely solely on a simplified output without considering other factors; (2) to use a simplified output as a consideration in a list of criteria but weight the output more heavily than other criteria in the set; or (3) to use the output to overrule human decision-making conclusions. Candidate Relevancy is not intended by ADP to be relied upon solely by employers in making employment decisions and is not meant to substantially assist or replace discretionary decision-making in employment decisions. Moreover, Candidate Relevancy is not intended to be used as a criterion that is weighted more than any other criterion in making employment decisions, and it is not intended to be used to overrule conclusions derived from other factors, including human decision-making.

Candidate Relevancy is intended to be one source of assistance in helping to prioritize candidates selected for next steps. Education, skills, and experience must be evaluated and validated by employers through person-to-person interviews and background checks, among other things. Candidate Relevancy is not intended to replace human judgment during any step of the recruitment process and is designed in such a way that there are no cut-off scores that would eliminate candidates from being visible to employers in the user interface. Employers are thereby provided access to all candidates, enabling them to make human decisions on which candidates to pursue.

If Candidate Relevancy is used as intended by ADP, ADP does not believe Candidate Relevancy to be an automated employment decision tool as defined by the New York City Ordinance and its related final rules.

Nothing herein is intended to be a legal opinion or to constitute legal advice. You should consult with an attorney before taking any action in reliance on the information provided herein, including whether Candidate Relevancy is an automated employment decision tool.

5. Did ADP conduct a bias audit on Candidate Relevancy?

Yes. At ADP, integrity is everything and is at the foundation of how we design and develop our solutions and services. Although ADP believes that Candidate Relevancy, if used as intended by ADP, does not fall within the scope of the NYC Ordinance, ADP is committed to ensuring that transparency and accountability are embedded in ADP’s offerings. ADP obtained an independent bias audit of Candidate Relevancy and Profile Relevancy from BLDS, LLC, an independent auditor, in April of 2023. The independent auditors concluded that no valid statistical evidence of bias is present in the scoring produced by Candidate Relevancy or Profile Relevancy.

6. What was the result of the bias audit conducted on Candidate Relevancy?

In April of 2023, an independent auditor, BLDS, LLC, performed an impartial evaluation of Candidate Relevancy. The version of Candidate Relevancy audited became available on October 29, 2022. The independent auditors concluded that no valid statistical evidence of bias is present.

A summary of the scoring rates and impact ratios2 based on sex, race/ethnicity, and the intersection of sex and race/ethnicity, adjusted for Simpson’s Paradox, is set forth in the following charts:

Sex Categories

Category           Applicants   Scoring Rate   Impact Ratio
Female             573,856      47.3%          1.000
Male               478,161      46.6%          0.986
Unknown Gender     397,979      --             --

Race/Ethnicity Categories

Category                      Applicants   Scoring Rate   Impact Ratio
Asian                         97,576       47.0%          0.956
Black or African American     278,592      46.5%          0.946
Hispanic or Latino            196,581      47.5%          0.965
Two or More Races             40,542       48.9%          0.994
White                         385,751      49.2%          1.000
Unknown Race/Ethnicity        439,250      --             --

Intersectional Categories

Category                             Applicants   Scoring Rate   Impact Ratio
Female, Asian                        40,617       48.4%          0.992
Female, Black or African American    166,039      46.7%          0.957
Female, Hispanic                     99,243       46.9%          0.962
Female, Two or More Races            22,186       48.8%          1.000
Female, White                        196,823      48.5%          0.993
Male, Asian                          51,661       46.1%          0.945
Male, Black or African American      103,143      45.3%          0.928
Male, Hispanic                       87,864       47.1%          0.965
Male, Two or More Races              14,187       47.1%          0.966
Male, White                          183,819      48.6%          0.996
Unknown Intersectionality            447,475      --             --
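For concreteness, each impact ratio in these charts is a category’s scoring rate divided by the scoring rate of the highest-scoring category (see footnote 2 below). The following minimal sketch recomputes the ratios from the rounded Sex Categories rates above; because the published ratios are computed from unrounded rates, the last digit can differ (0.985 here versus the published 0.986).

# Recompute impact ratios from category scoring rates (rounded figures taken
# from the Sex Categories chart above). Impact ratio = category scoring rate
# divided by the scoring rate of the highest-scoring category.
scoring_rates = {"Female": 0.473, "Male": 0.466}

highest = max(scoring_rates.values())
impact_ratios = {category: round(rate / highest, 3) for category, rate in scoring_rates.items()}

print(impact_ratios)  # {'Female': 1.0, 'Male': 0.985}; published value is 0.986 from unrounded rates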

2 Consistent with the New York City Ordinance, impact ratio means either (1) the selection rate for a category divided by the selection rate of the most selected category or (2) the scoring rate for a category divided by the scoring rate for the highest scoring category.

The American Indian or Alaska Native and Native Hawaiian or Other Pacific Islander categories were not included in computing the Impact Ratio because both categories comprised less than 1% of the population and the New York City Ordinance does not require their inclusion when computing the Impact Ratio. In the opinion of the independent auditors, including such small categories could make American Indian or Alaska Native or Native Hawaiian or Other Pacific Islander the highest-scoring race/ethnicity or intersectional category on the basis of a small number of cases. Using such a small sample as the reference group to judge other categories is questionable, as the standard for judging the results of other categories for many jobs/sectors would then be set by only a handful of cases. The table below reports the data, adjusted for Simpson’s Paradox, for the categories that were not used in computing the Impact Ratio.

Populations Less Than 1%

Category                                      Applicants   Scoring Rate
Native American / Alaska Native               5,050        45.4%
Native Hawaiian / Pacific Islander            4,329        48.7%
Female Native American / Alaska Native        2,662        44.7%
Male Native American / Alaska Native          1,749        46.4%
Female Native Hawaiian / Pacific Islander     2,240        50.5%
Male Native Hawaiian / Pacific Islander       1,706        52.3%

This analysis was conducted across all uses of Candidate Relevancy where sufficient self-ID information was available. Nothing in these FAQs should be taken as a guarantee that a particular client’s use of Candidate Relevancy will never result in adverse impact or bias.

7. What was the result of the bias audit conducted on Profile Relevancy?

An independent bias audit of Profile Relevancy was also conducted by BLDS, LLC in April of 2023. The version of Profile Relevancy audited became available on January 4, 2023.3 The independent auditors concluded that no valid statistical evidence of bias is present. This analysis defined “selection” as candidates placed in the “High” category and in the “High or Medium” category. A summary of the selection rates and impact ratios based on sex, race/ethnicity, and the intersection of sex and race/ethnicity, adjusted for Simpson’s Paradox, is set forth in the following charts:

3 Candidate Relevancy and Profile Relevancy rely on the same algorithm to produce a numerical relevancy score (1 to 100). Candidate Relevancy displays the numerical score (1 to 100) to recruiters, while Profile Relevancy converts the numerical score into a High, Medium, or Low relevancy category. Because the interface is different at this time, ADP obtained separate independent bias audits for Candidate Relevancy and Profile Relevancy.

Sex Categories

Selection Classified as High

Category     Applicants   Selections   Scoring Rate   Impact Ratio
Female       965,033      403,384      41.8%          1.000
Male         753,234      313,345      41.6%          0.996

Selection Classified as High or Medium

Category       Applicants   Selections   Scoring Rate   Impact Ratio
Female         1,016,605    754,321      74.2%          1.000
Male           798,979      589,647      73.8%          0.994
Unknown Sex    3,321,033    --           --             --

Race/Ethnicity Categories

Selection Classified as High

Category                      Applicants   Selections   Scoring Rate   Impact Ratio
Asian                         112,345      44,841       39.9%          0.926
Black or African American     382,898      161,410      42.2%          0.978
Hispanic or Latino            276,856      116,809      42.2%          0.979
Two or More Races             52,340       22,564       43.1%          1.000
White                         583,157      247,424      42.4%          0.984

Selection Classified as High or Medium

Category                      Applicants   Selections   Scoring Rate   Impact Ratio
Asian                         122,964      89,946       73.1%          0.958
Black or African American     397,718      298,755      75.1%          0.983
Hispanic or Latino            287,605      215,204      74.8%          0.980
Two or More Races             53,340       40,741       76.4%          1.000
White                         615,422      462,207      75.1%          0.983
Unknown Race/Ethnicity        3,619,203    --           --             --

Intersectional Categories

Selection Classified as High

Category                             Applicants   Selections   Scoring Rate   Impact Ratio
Female, Asian                        47,381       19,022       40.1%          0.871
Female, Black or African American    222,814      95,186       42.7%          0.927
Female, Hispanic/Latino              144,144      61,157       42.4%          0.920
Female, Two or More Races            29,177       13,044       44.7%          0.970
Female, White                        312,614      131,826      42.2%          0.915
Male, Asian                          54,302       21,990       40.5%          0.878
Male, Black or African American      141,863      61,071       43.0%          0.934
Male, Hispanic/Latino                114,492      49,013       42.8%          0.929
Male, Two or More Races              15,126       6,974        46.1%          1.000
Male, White                          252,595      105,942      41.9%          0.910

Selection Classified as High or Medium

Category                             Applicants   Selections   Scoring Rate   Impact Ratio
Female, Asian                        51,315       37,744       73.6%          0.933
Female, Black or African American    230,364      174,966      76.0%          0.963
Female, Hispanic/Latino              148,700      111,852      75.2%          0.954
Female, Two or More Races            29,633       22,998       77.6%          0.984
Female, White                        329,146      246,432      74.9%          0.950
Male, Asian                          59,395       43,482       73.2%          0.928
Male, Black or African American      146,513      111,052      75.8%          0.961
Male, Hispanic/Latino                118,486      89,467       75.5%          0.958
Male, Two or More Races              15,263       12,035       78.8%          1.000
Male, White                          268,121      200,124      74.6%          0.947
Unknown Intersectional               3,642,481    --           --             --

The American Indian or Alaska Native and Native Hawaiian or Other Pacific Islander categories were not included in computing the Impact Ratio because both categories comprised less than 1% of the population, and the New York City Ordinance does not require their inclusion when computing the Impact Ratio. In the opinion of the independent auditors, including such small categories could make American Indian or Alaska Native or Native Hawaiian or Other Pacific Islander the highest-selection-rate race/ethnicity or intersectional category on the basis of a trivial number of cases. Using such a small sample as the reference group to judge other categories is questionable, as the standard for judging the results of other categories for many jobs/sectors would then be set by only a handful of cases. The table below reports the data, adjusted for Simpson’s Paradox, for the categories that were not used in computing the Impact Ratio.

Populations Less Than 1%

Selection Classified as High

Category                                      Applicants   Selections   Selection Rate
Native American / Alaska Native               4,940        2,259        45.7%
Native Hawaiian / Pacific Islander            3,151        1,599        50.7%
Female Native American / Alaska Native        2,606        1,173        45.0%
Male Native American / Alaska Native          1,257        602          47.9%
Female Native Hawaiian / Pacific Islander     1,567        780          49.8%
Male Native Hawaiian / Pacific Islander       831          469          56.4%

Selection Classified as High or Medium

Category                                      Applicants   Selections   Selection Rate
Native American / Alaska Native               4,955        3,887        78.4%
Native Hawaiian / Pacific Islander            3,151        2,603        82.6%
Female Native American / Alaska Native        2,617        2,026        77.4%
Male Native American / Alaska Native          1,257        1,031        82.0%
Female Native Hawaiian / Pacific Islander     1,583        1,287        81.3%
Male Native Hawaiian / Pacific Islander       847          717          84.6%

This analysis was conducted across all uses of Profile Relevancy where sufficient self-ID information was available. Nothing in these FAQs should be taken as a guarantee that a particular client’s use of Profile Relevancy will never result in adverse impact or bias.

8. Can applicants opt out of having their resume reviewed by Candidate Relevancy? What happens if someone opts out?

All applicants are included in the applicant queue for a recruiter to review. Individuals applying through ADP’s recruiting platforms can choose not to have their application reviewed by the Candidate Relevancy or Profile Relevancy tools. Each opt-out choice is job-specific and opts the candidate out for that job posting only. For applicants who choose to opt out, their score will be listed as “Not Available,” which is the same indicator used when a relevancy score is unavailable for reasons other than opting out (e.g., technical issues or poor resolution on a resume PDF).

ADP’s Commitment to Ethical Artificial Intelligence

For more information about ADP’s commitment to ethical artificial intelligence, please refer to https://www.adp.com/about-adp/artificial-intelligence.aspx. For any questions or inquiries, please contact ad1p9p@r.postjobfree.com.

This document and all of its contents are the property of ADP, Inc. This document is for information purposes only. Diagrams, tables, percentages, and/or outcomes used in this document are for illustration purposes only. Individual outcomes vary by customer. ADP’s customers are solely responsible for their use of ADP technology. ADP will not be responsible for any liability, loss, or damage of any kind resulting from or connected with the use of this document.


