1. 85% of companies claim to practice skills-based hiring, but only 0.14% of hires are actually affected by degree requirement removal (Harvard Business School/Burning Glass Institute)
2. Removing degree filters creates 19x larger candidate pools, opening access to 70M+ STARs (Skilled Through Alternative Routes) in the U.S. workforce (LinkedIn Economic Graph / Opportunity@Work)
3. Skills assessments predict job performance 2x better than unstructured interviews alone (Schmidt & Hunter 1998 meta-analysis); structured interviews + work sample tests have the highest validity
4. Legal compliance is non-negotiable: assessments must be job-related and consistent with business necessity under Title VII; the 4/5ths rule and Griggs v. Duke Power set the framework for fair selection
5. Companies genuinely implementing it: Google dropped degree requirements in 2018, followed by Apple, IBM, Accenture, and state governments including Maryland, Colorado, and Pennsylvania
HR Manager Median Salary: $140,030
I/O Psychologist Median Salary: $109,840
HR Specialist Job Growth: 8%
I/O Psychologist Job Growth: 14%
The Rhetoric vs. Reality of Skills-Based Hiring
The headline statistic that defines this movement is uncomfortable: 85% of employers say they practice skills-based hiring, but Harvard Business School and the Burning Glass Institute found that only 0.14% of hires are actually affected by degree requirement removal. That's not a rounding error. That's a nearly complete disconnect between what companies say and what they do.
Between 2020 and 2024, roughly 25% of employers removed degree requirements from at least some job postings. The press releases wrote themselves. But when researchers tracked actual hiring outcomes, degree-holders continued to be hired at almost exactly the same rates as before. The job postings changed. The hiring didn't. This pattern, sometimes called 'paper ceiling' removal, represents the most common version of skills-based hiring in practice: symbolic credential removal without any change to how candidates are actually screened, assessed, or selected.
TestGorilla's 2024 State of Skills-Based Hiring report found that 70% of employers say they use skills-based hiring practices. But saying you use skills-based hiring and actually replacing credential-based screening with validated competency assessment are very different things. Most organizations in the '70%' are doing one or more of the following: removing 'bachelor's degree required' from the job posting text, adding a skills assessment as one step in a process that still heavily weights credentials, or asking behavioral interview questions they describe as 'competency-based' without any structured scoring.
None of those are bad practices. But they're not what the research means by skills-based hiring. The research means systematic assessment of job-relevant competencies, validated against actual performance outcomes, as the primary basis for selection decisions. Almost nobody is doing that. This matters because HR professionals are the ones tasked with implementing these initiatives. Understanding what the evidence actually says, rather than what the vendor pitch decks claim, is the difference between building something real and running a rebranding exercise. See in-demand HR skills for the competencies driving modern HR practice.
The Science of Assessment: What Actually Predicts Performance
The foundational research on personnel selection comes from Frank Schmidt and John Hunter's 1998 meta-analysis, which synthesized 85 years of research on the validity of different hiring methods. Their findings, updated in subsequent analyses, remain the most comprehensive evidence base for assessment decisions in HR. The key finding: general mental ability (GMA) tests combined with structured interviews or work sample tests produce the highest predictive validity for job performance, roughly twice the accuracy of unstructured interviews alone.
Here's the validity hierarchy from that research, expressed as correlation coefficients with job performance:
- Work sample tests: .54
- Structured interviews: .51
- General mental ability tests: .51
- Job knowledge tests: .48
- Integrity tests: .41
- Unstructured interviews: .38
- Conscientiousness measures: .31
- Reference checks: .26
- Years of job experience: .18
- Years of education: .10
The numbers tell a clear story: what people can demonstrate (work samples, structured responses to standardized questions) predicts performance far better than what they've accumulated (years of experience, credentials).
Unstructured interviews are particularly problematic because they feel highly informative while being mediocre predictors. Interviewers develop strong confidence in their assessments during unstructured conversations, but that confidence doesn't correlate with accuracy. The psychological mechanism is well-documented: we form rapid impressions based on similarity, social fluency, and demographic cues, then spend the rest of the interview confirming those impressions. This is why structured approaches matter. Same questions, same order, predefined scoring rubric, multiple assessors. Structure constrains the bias that unstructured interaction amplifies.
For talent acquisition professionals and recruiters, this research has direct practical implications. The most valid selection system combines a cognitive ability measure (or work sample that requires cognitive engagement), a structured interview, and a conscientiousness or integrity assessment. Each method captures different aspects of the candidate. Together, they create a multi-method assessment that predicts performance substantially better than any single method.
Psychometric Tools for HR Professionals
Psychometric assessment in hiring falls into several categories, each measuring different constructs with different levels of predictive validity. Understanding what each tool actually measures, and what it doesn't, is essential for building a defensible selection system. The psychometric assessment market is growing rapidly as organizations seek alternatives to credential-based screening, but not every tool on the market has adequate validity evidence.
Cognitive ability tests measure general mental ability, including reasoning, problem-solving, learning speed, and pattern recognition. They're among the strongest single predictors of job performance across virtually all job types (Schmidt & Hunter 1998). However, they also show the largest group differences of any assessment method, which creates adverse impact concerns under Title VII. This is the fundamental tension in cognitive testing: high validity, high adverse impact. HR professionals must navigate this carefully. Options include using cognitive measures as one component in a multi-method battery, using job-specific cognitive tests rather than general intelligence measures, and ensuring assessments are demonstrably job-related through proper job analysis.
Personality assessments, particularly those based on the Big Five (OCEAN) model -- Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism -- measure behavioral tendencies that relate to job performance in specific contexts. Conscientiousness is the most consistent predictor across job types (.31 validity). Other dimensions predict performance in role-specific ways: Extraversion predicts sales and leadership performance, Agreeableness predicts teamwork and customer service, Openness predicts creative and training performance. The Big Five model has decades of validation research, unlike many proprietary personality tools that lack independent validation evidence.
Situational judgment tests (SJTs) present candidates with realistic work scenarios and ask them to choose or rank responses. They measure practical judgment and applied knowledge, typically showing moderate validity (.34) with lower adverse impact than cognitive tests. SJTs can be customized for specific roles and organizations, making them especially useful for entry-level and mid-level positions. Work sample tests go further by having candidates perform actual job tasks under standardized conditions. A coding exercise for software developers, a writing sample for content roles, a case presentation for consultants. Work samples have the highest face validity (candidates see the relevance) and strong predictive validity (.54), but they're more expensive to develop and administer.
For HR professionals evaluating assessment vendors: ask for validity evidence from independent research, not just the vendor's own studies. Ask about adverse impact data. Ask whether the tool was developed using job analysis data or general constructs. Ask about test-retest reliability. A credible assessment provider will have this data and share it readily. One that deflects or substitutes marketing claims for psychometric evidence is selling a product, not a validated tool. See HR analytics career for how data skills apply to assessment and selection.
Legal and Ethical Considerations in Skills-Based Selection
Important disclaimer: This section provides educational information about employment law concepts relevant to skills-based hiring. It is not legal advice. Organizations implementing selection systems should consult qualified employment attorneys to ensure compliance with federal, state, and local laws.
The legal framework for employee selection in the United States was fundamentally shaped by Griggs v. Duke Power Co. (1971), where the Supreme Court established that employment practices that disproportionately exclude protected groups must be justified by business necessity. The company's requirement of a high school diploma and passing scores on two general aptitude tests was struck down because neither requirement was shown to be related to job performance. The principle is directly relevant to skills-based hiring: any assessment tool you use must be demonstrably job-related, not just face-valid.
The Uniform Guidelines on Employee Selection Procedures (1978) provide the regulatory framework, including the 4/5ths (or 80%) rule for identifying adverse impact. If the selection rate for a protected group is less than 80% of the rate for the group with the highest selection rate, adverse impact may exist. When adverse impact is present, the employer bears the burden of demonstrating that the selection procedure is valid and job-related. This requires evidence: content validity (the test samples actual job content), criterion-related validity (test scores correlate with job performance), or construct validity (the test measures psychological constructs shown to relate to performance).
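To make the arithmetic concrete, here is a minimal sketch of the 4/5ths check in Python. The group labels and counts are hypothetical, and a real adverse impact analysis would also consider statistical significance and sample size.

```python
# Minimal sketch of the 4/5ths (80%) rule with hypothetical applicant and
# hire counts. A real analysis would also test statistical significance
# and account for small samples.

applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 60, "group_b": 30}

# Selection rate = hires / applicants, per group.
rates = {group: hires[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "potential adverse impact" if impact_ratio < 0.8 else "passes 4/5ths check"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({status})")
```

In this illustration, group_b's 20% selection rate is only 67% of group_a's 30% rate, so the 4/5ths threshold flags it for further review.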
Title VII of the Civil Rights Act of 1964, enforced by the EEOC, prohibits employment discrimination based on race, color, religion, sex, and national origin. The EEOC has specifically addressed assessment tools in its Uniform Guidelines and subsequent guidance documents. Key requirements: assessments must be job-related and consistent with business necessity, employers should explore alternative procedures with less adverse impact, and assessments should be validated for the specific positions in which they're used. Simply purchasing an off-the-shelf assessment tool does not establish legal defensibility. The employer must demonstrate the connection between the assessment and the specific job requirements.
For HR professionals building skills-based selection systems, the practical takeaway is: start with job analysis. Document what competencies are actually required for the role. Select or develop assessments that measure those specific competencies. Monitor outcomes for adverse impact. Validate your tools against actual performance data when possible. And document everything. The legal standard is not perfection. It's a good-faith effort to use job-related, validated selection methods. See employment law basics and EEOC guidelines for foundational legal concepts, and consult legal counsel for specific implementation guidance.
What's Actually Working: Companies Beyond the Press Release
A handful of organizations have moved past the symbolic gesture of removing degree requirements and made structural changes to how they assess and select candidates. Google dropped degree requirements for many positions starting in 2018, and by 2022 roughly 50% of its U.S. workforce in some business units did not hold a four-year degree. The change wasn't just about removing a checkbox. Google invested in structured interview processes, standardized rubrics, and work sample assessments that could evaluate candidates consistently regardless of educational background.
IBM has been more vocal than most about their shift, publicly committing to identifying roles by skills rather than degrees and creating 'new collar' job categories that emphasize demonstrated capability over credentials. Apple, Accenture, and other large employers followed with similar announcements. But the Harvard Business School/Burning Glass research suggests that even among these high-profile adopters, the actual hiring impact has been modest. The degree-free job postings attract different applicant pools, but hiring managers continue to favor candidates with traditional credentials unless the assessment process explicitly provides an alternative signal of capability.
The public sector has been more systematic. Maryland became the first state to remove degree requirements from thousands of government positions in 2022, followed by Colorado, Pennsylvania, and more than a dozen other states. Government hiring often uses more structured processes to begin with (civil service exams, scored applications, standardized interviews), which makes the shift to competency-based selection more operationally feasible. Opportunity@Work estimates there are more than 70 million STARs (Skilled Through Alternative Routes) in the U.S. workforce, meaning workers who have skills gained through community college, military service, workforce training, or on-the-job experience rather than four-year degrees.
What distinguishes the organizations seeing real results from those generating press coverage? Three things. First, they changed the assessment process, not just the job posting. Removing 'bachelor's degree required' means nothing if the resume screener still filters for university names. Second, they invested in structured, validated alternatives -- work samples, structured interviews with scoring rubrics, or skills assessments with demonstrated validity. Third, they trained hiring managers to evaluate candidates using those alternative signals rather than defaulting to credential heuristics. The third step is where most implementations fail. See recruiting best practices for more on building effective hiring processes.
Building a Skills-Based Hiring Program That Works
If your organization is serious about skills-based hiring rather than just rewriting job postings, the process starts with job analysis. Not the generic job description sitting in your HRIS. A rigorous analysis of what competencies are actually required for successful performance in the role. What does a top performer do differently from an average performer? What knowledge, skills, and abilities distinguish success from failure in the first year? The answers to these questions define what you're selecting for, and they must come from observable job behavior, not from assumptions about which credentials produce that behavior.
From job analysis, you build a competency model: a structured framework of the knowledge, skills, abilities, and other characteristics (KSAOs) required for the role. This is where many organizations skip steps. They go directly from 'we want skills-based hiring' to 'let's buy an assessment platform.' Without a validated competency model, you don't know what you're assessing, which means you can't demonstrate job-relatedness, which means your system is both legally vulnerable and unlikely to predict performance. Competency modeling is the bridge between the job and the assessment. See organizational development specialist career for how OD professionals approach this work.
Assessment design follows from the competency model. For each critical competency, select or develop an assessment method with demonstrated validity for measuring that construct. Practical options include: structured behavioral interviews (where candidates describe past situations using the STAR format, scored against predefined rubrics), work sample tests (candidates complete a representative task under standardized conditions), situational judgment tests (candidates choose how they'd handle realistic scenarios), job knowledge tests (for roles requiring specific technical knowledge), and validated psychometric instruments (cognitive ability, personality, integrity). The strongest systems use multiple methods, because each captures different variance in candidate capability.
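To make "predefined scoring rubric" concrete, here is a minimal sketch of how a structured behavioral interview rubric might be represented. The competencies, questions, and behavioral anchors below are hypothetical illustrations, not a validated instrument.

```python
# Hypothetical structured-interview rubric: each competency from the job
# analysis gets a standard question and behaviorally anchored rating levels.
RUBRIC = {
    "problem_solving": {
        "question": "Describe a time you diagnosed a recurring process failure.",
        "anchors": {
            1: "Restated the problem; no systematic diagnosis",
            3: "Identified a root cause with prompting",
            5: "Isolated the root cause, tested a fix, measured the result",
        },
    },
    "collaboration": {
        "question": "Tell me about a conflict you helped resolve within a team.",
        "anchors": {
            1: "Avoided or escalated without attempting resolution",
            3: "Addressed the conflict directly; partial resolution",
            5: "Surfaced both sides' interests and reached a durable agreement",
        },
    },
}

def interview_score(ratings: dict[str, int]) -> float:
    """Average the per-competency ratings into one structured interview score."""
    return sum(ratings.values()) / len(ratings)

# Every candidate answers the same questions and is scored against the same anchors.
print(interview_score({"problem_solving": 5, "collaboration": 3}))  # 4.0
```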
Validation is the step most organizations skip entirely. After implementing your assessment battery, track how assessment scores relate to actual job performance over time. Do candidates who score higher on your work sample test perform better in the role six months later? If yes, you have criterion-related validity evidence. If no, your assessment is measuring something, but not something that matters for performance. Validation requires patience, sample size, and a commitment to adjusting your system based on data rather than intuition. HR analytics professionals are increasingly essential for this work.
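As a sketch of what that validation step looks like in practice, the snippet below computes a criterion-related validity coefficient from matched predictor and criterion data. The scores and ratings are hypothetical, and a real validation study needs a much larger sample.

```python
import numpy as np

# Hypothetical matched data: work-sample scores at hire and supervisor
# performance ratings for the same people six months later.
assessment_scores = np.array([62, 71, 58, 90, 77, 66, 84, 73, 69, 88])
performance_ratings = np.array([3.1, 3.6, 2.8, 4.5, 3.9, 3.2, 4.2, 3.5, 3.4, 4.4])

# Criterion-related validity = correlation between predictor and criterion.
validity = np.corrcoef(assessment_scores, performance_ratings)[0, 1]
print(f"Observed validity coefficient: r = {validity:.2f}")
```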
Finally, train your hiring managers. The best assessment system in the world fails if the hiring manager ignores the data and hires based on gut feeling. Structured interviewer training, calibration sessions where interviewers practice scoring together, and accountability mechanisms (requiring documentation of how assessment results informed the decision) are what separate skills-based hiring programs that work from ones that exist only on paper. The talent acquisition certification covers many of these competencies.
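One simple way to quantify calibration during those sessions, sketched below with hypothetical ratings: have interviewers score the same practice responses and measure how far apart they land.

```python
import numpy as np

# Hypothetical calibration check: two interviewers rate the same five
# practice responses on a 1-5 rubric. Large average gaps or low exact
# agreement are a signal to revisit the behavioral anchors together.
rater_a = np.array([4, 3, 5, 2, 4])
rater_b = np.array([3, 3, 4, 2, 5])

mean_gap = np.abs(rater_a - rater_b).mean()
exact_agreement = (rater_a == rater_b).mean()
print(f"Mean rating gap: {mean_gap:.1f} points; exact agreement: {exact_agreement:.0%}")
```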
The Psychology of Fair Assessment
Industrial-organizational (I/O) psychology has spent decades studying how to make selection fair and predictive simultaneously. The central tension is real: some of the most valid predictors of job performance (notably cognitive ability tests) also show the largest group differences. There's no easy solution to this. But there are principled approaches, grounded in that research, that can help organizations build selection systems that are both effective and equitable.
Structured assessment is the single most impactful bias-reduction strategy in hiring. When every candidate answers the same questions, in the same order, scored against the same rubric by trained assessors, you constrain the space in which bias operates. Unstructured interviews let interviewers form impressions based on rapport, similarity, communication style, and demographic cues, then rationalize those impressions as 'fit.' Structure doesn't eliminate bias, but it provides accountability and consistency that unstructured processes lack entirely.
Multi-method assessment batteries with compensatory scoring (where strength on one assessment can offset weakness on another) tend to reduce adverse impact compared to using any single method as a pass/fail screen. For example, a system that combines a cognitive ability test with a personality measure and a structured interview, weighting each appropriately based on job analysis, will typically show less adverse impact than using cognitive testing alone as a threshold. The goal is to capture the full range of competencies relevant to the role, rather than over-indexing on a single construct.
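A minimal sketch of compensatory scoring under illustrative assumptions: the three assessments, their weights, and the candidate scores below are hypothetical, and real weights should come from job analysis and validation rather than from this example.

```python
import numpy as np

# Hypothetical assessment battery. Columns: cognitive test (0-100),
# structured interview (1-5), conscientiousness scale (raw score).
weights = {"cognitive": 0.4, "structured_interview": 0.4, "conscientiousness": 0.2}
raw_scores = np.array([
    [72.0, 4.2, 55.0],   # candidate 1
    [88.0, 3.1, 61.0],   # candidate 2
    [65.0, 4.8, 70.0],   # candidate 3
])

# Standardize each column so the methods share a common scale, then combine.
z_scores = (raw_scores - raw_scores.mean(axis=0)) / raw_scores.std(axis=0)
composite = z_scores @ np.array(list(weights.values()))

# Compensatory: strength on one method can offset weakness on another,
# unlike a pass/fail screen on any single assessment.
for i, score in enumerate(composite, start=1):
    print(f"Candidate {i}: composite z = {score:+.2f}")
```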
Transparency matters for both fairness and candidate experience. Candidates who understand what's being assessed and why tend to perceive the process as fairer, even when they don't get the job. Providing clear instructions, realistic previews of assessment content, and feedback where feasible improves both the equity and the employer brand dimensions of your selection process. LinkedIn data shows that removing degree filters creates 19x larger candidate pools. That expanded pool only produces better outcomes if your assessment process can accurately identify capability among candidates with non-traditional backgrounds.
For HR professionals building or evaluating selection systems, the I/O psychology principles are straightforward even if the implementation isn't easy: base assessments on job analysis, use multiple validated methods, structure everything you can, monitor outcomes by group, and validate against actual performance. These aren't aspirational ideals. They're the professional standard established by decades of research, codified in the Uniform Guidelines, and reinforced by case law. Organizations that follow them hire better. Organizations that don't are guessing with a veneer of process.
Sources
1. Harvard Business School / Burning Glass Institute. Managing the Future of Work — Research on skills-based hiring adoption vs. actual hiring outcome changes (85% claim, 0.14% impact)
2. LinkedIn Economic Graph — Data on degree requirement removal trends and candidate pool expansion (19x larger pools)
3. Schmidt, F. L., & Hunter, J. E. (1998). "The Validity and Utility of Selection Methods in Personnel Psychology" — Foundational meta-analysis of 85 years of personnel selection research; validity coefficients for assessment methods
4. SHRM (Society for Human Resource Management) — Industry standards, competency modeling resources, and SHRM-aligned program curriculum
5. U.S. Equal Employment Opportunity Commission — Uniform Guidelines on Employee Selection Procedures, Title VII enforcement, assessment compliance guidance
6. Opportunity@Work — Research on STARs (Skilled Through Alternative Routes), 70M+ workers with skills gained outside four-year degrees
7. TestGorilla. 2024 State of Skills-Based Hiring Report — Survey data on employer adoption of skills-based hiring practices (70% of employers)
Taylor Rupe
Education Researcher & Data Analyst
B.A. Psychology, University of Washington · B.S. Computer Science, Oregon State University
Taylor combines training in behavioral science with data analysis to evaluate HR education programs. His research methodology uses IPEDS completion data, BLS employment statistics, and SHRM alignment data to produce evidence-based program rankings.
