In an endeavor to remove human bias from the recruitment and hiring process, organizations are increasingly automating decision-making in the sourcing, screening, and selection of job candidates. Organizations that employ algorithm-based tools do so in the belief that such tools not only decrease, or even fully eliminate, the influence of interpersonal bias, but also improve overall efficiency and effectiveness. Critics, however, have argued that because predictive algorithms are typically trained on real-world data, they merely represent new ways of codifying and perpetuating structural patterns of social inequality, ultimately decreasing equity in hiring rather than increasing it (Lambrecht & Tucker, 2019; Yarger, Cobb Payton, & Neupane, 2019).
It is from this point of departure that the current project asks: “Are we really automating equity?” We approach this question from a communication science perspective and investigate the presence, causes, and consequences of algorithmic bias as reflected in the frames that emerge throughout the recruitment and hiring process. Focusing on age and gender bias, we explore frame content, frame construction, and framing effects in three studies, each corresponding to one of the three recruitment and hiring phases. The significance and implications of the present project carry weight for several stakeholders: for organizations relying on automation in recruitment and hiring, for job seekers from various social groups, for national economies and increasingly tight labor markets, and for anyone invested in keeping equity at the forefront as we become a truly digital society.