The application of generative artificial intelligence (AI) has attracted great attention since OpenAI launched the ChatGPT prototype, revealing substantial potential to reshape human-AI interactions. However, little is known about how perceptions of the human-like characteristics of generative AI shape individuals’ psychological and behavioral responses. This study addresses the research question: “How do perceived AI intelligence and morality affect focal employees’ ethical voice when using generative AI?” Drawing upon the “computers are social actors” (CASA) paradigm and research on social perceptions, we propose that individuals who perceive high levels of AI intelligence and AI morality are more likely to commit to the goal of developing cooperative human-AI relationships. We tested our research model in a three-wave survey of 535 employees. The results demonstrated that perceived AI intelligence was positively related to employees’ human-AI cooperation goal commitment, which in turn motivated their ethical voice for the responsible use of generative AI. Moreover, perceived AI morality interacted with perceived AI intelligence to strengthen employees’ commitment to the human-AI cooperation goal, leading to more ethical voice toward the responsible use of AI. These findings have implications for both theoretical understanding and practical application concerning employees’ responses to generative AI in the workplace.