The sudden availability of generative artificial intelligence tools has given rise to concerns about individuals’ capacity to correctly evaluate and adopt outputs from generative AI. This is particularly important in the context of programming work, which involves complex problem-solving and deep expertise that have traditionally been difficult to automate. Generative AI technology can provide complete solutions to programming problems that previously required programmers to seek answers from their colleagues or from online question-and-answer forums dedicated to programming. Unlike these traditional channels, tools based on generative AI offer programming solutions in real time as they become integrated into professional workflows. Yet AI-based tools are also known to occasionally produce 'hallucinations', that is, plausible-looking yet misleading or incorrect outputs. This raises the question of how programmers assess and perceive the quality of solutions produced by generative AI as opposed to human-generated solutions, and whether this results in algorithm appreciation or aversion. Understanding this dilemma is critical for programmers and their managers as they decide how to make the best use of rapidly developing generative AI tools. We draw on the elaboration likelihood model and cognitive load theory to study how the source of a programming solution affects programmers' assessments of solution quality and their decisions to adopt solutions produced by generative AI.