While Artificial Intelligence (AI) offers many benefits for organizations, work, and society, it also creates risks and vulnerabilities that challenge trust in its use, with recent global surveys demonstrating low levels of trust in AI. This timely, systematic review takes stock of the rapidly expanding literature on trust in AI, revealing that 63% of the research has been published since 2020. We synthesize research insights around five key trust challenges that are unique to, or exacerbated by, AI: (1) transparency and explainability, (2) accuracy and reliability, (3) data extraction, privacy, and personalization, (4) automation, human autonomy, and agency, and (5) anthropomorphism and embodiment. For each trust challenge, we identify key findings and boundary conditions, and highlight the inherent tensions and paradoxes involved in addressing them. In so doing, we identify and challenge an implicit assumption in the literature: that increasing trust in AI is inherently a good thing. We argue that responsible AI use requires well-calibrated trust and examine the literature for insights that inform appropriate levels of trust to promote a critical, human-centered approach to AI. We conclude with future research directions and recommendations to address methodological limitations in the literature.