In an era of AI–human collaboration within organizations, preserving independent human judgment is crucial. Recent studies reveal a concerning trend known as "algorithmic conformity," in which individuals frequently follow flawed algorithmic recommendations. Despite the documented prevalence of this behavior, its underlying mechanisms have remained elusive. Addressing this gap, our study proposes two key mechanisms driving individuals' conformity to algorithmic recommendations in work-related contexts: the perceived superior capabilities of algorithms ("authority of competence") and the belief that algorithms hold formal authority in the work environment, including the ability to enforce consequences for non-compliance ("formal authority"). In realistic gig-work simulations with 1,134 participants, our experiments demonstrate that algorithmic compliance is driven by perceived competence but is significantly amplified when workers attribute formal authority to algorithms. Additional experiments show that explicitly acknowledging AI's formal authority intensifies algorithmic conformity. Remarkably, this inclination persists even in task domains perceived as less suitable for AI, such as facial sentiment recognition; in that context, conformity is driven primarily by perceptions of formal authority rather than competence. These findings underscore the risks inherent in algorithmic collaboration, emphasizing the imperative to educate, empower, and train humans-in-the-loop to exercise their judgment in gig-economy settings and beyond.