Organizations increasingly rely on algorithms to make their personnel selection practices more efficient. However, such algorithms can have an adverse impact on demographic subgroups (e.g., different genders, age groups, or ethnicities), and equalizing test-score differences between these subgroups comes at the cost of some of the algorithm's efficiency, traded for its group fairness. Drawing on Folger and Cropanzano's fairness theory, we test a conceptual model of the antecedents of efficiency-versus-fairness choices in algorithm-based personnel selection, using a gender-based fairness scenario in a series of four experiments (Experiment 1: 283 MTurkers; Experiment 2: 276 MTurkers; Experiment 3: 277 MTurkers; Experiment 4: 239 managers and 247 graduate students). We find that (a) the extent of fairness violations, (b) individual differences in fairness perceptions, and (c) the baseline efficiency of the algorithm affect the choice between a more efficient and a fairer algorithm, whereas (d) the stakeholder perspective receives mixed support and (e) the fairness concept applied (i.e., statistical parity versus equal opportunity) has no effect. Our research contributes to the literature on algorithm ethics.
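To make the contrast between the two fairness concepts named in the abstract concrete, the sketch below computes both for a binary selection decision. It is a hedged illustration only: the helper functions, variable names, and toy data are hypothetical and not taken from the study's materials; they merely implement the standard definitions of statistical parity (equal overall selection rates across groups) and equal opportunity (equal selection rates among qualified candidates across groups).

```python
# Hypothetical sketch (not the study's materials): the two group-fairness
# concepts compared in the experiments, for a binary selection decision,
# a binary qualification label, and a binary group attribute
# (e.g., gender coded 0/1).

def statistical_parity_gap(selected, group):
    """Gap in overall selection rates between group 0 and group 1."""
    def rate(g):
        members = [s for s, grp in zip(selected, group) if grp == g]
        return sum(members) / len(members)
    return rate(0) - rate(1)

def equal_opportunity_gap(selected, qualified, group):
    """Gap in selection rates among qualified candidates only
    (i.e., true-positive rates) between group 0 and group 1."""
    def tpr(g):
        members = [s for s, q, grp in zip(selected, qualified, group)
                   if grp == g and q == 1]
        return sum(members) / len(members)
    return tpr(0) - tpr(1)

# Toy data: the same selection decisions can violate the two
# concepts to different degrees.
group     = [0, 0, 0, 0, 1, 1, 1, 1]
qualified = [1, 1, 0, 0, 1, 1, 0, 0]
selected  = [1, 1, 0, 1, 1, 0, 1, 0]

print(statistical_parity_gap(selected, group))            # 0.25
print(equal_opportunity_gap(selected, qualified, group))  # 0.5
```

A gap of zero on one metric does not imply a gap of zero on the other, which is why the experiments can vary the fairness concept independently of the other factors.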