Responding to recent advances in artificial intelligence (AI), experts warn of exponential and potentially catastrophic, even existential, risks. But there is intense debate about the likelihood and severity of such risks, partly because current theories do not adequately describe or explain such phenomena. My paper addresses this problem and contributes to theory by exposing new types of AI risk. To begin with, like others, I argue that the complexity, opacity, and non-explainability of advanced AI obscure from human agents significant costs and benefits which users incur. Stated formally, AI produces internalities, defined as costs and benefits which agents themselves incur from their own choices but do not account for. This contrasts with externalities, which agents impose on third parties but do not account for. I further theorize that the rapid diffusion of advanced AI within ecosystems generates major internalities at the ecosystem level, negative forms of which may pose exponential and potentially existential risks. Next, I argue that when negative internalities owing to AI compound negative externalities, they increase the likelihood of catastrophic outcomes. Major implications follow for the governance of AI in digital ecosystems and platforms, and for the mitigation of exponential and existential risks.
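One way to formalize the distinction between internalities and externalities (the notation below is an illustrative sketch in standard welfare-economic terms, not the paper's own formalism) is as follows. Suppose an agent chooses an action $x$ to maximize the utility $\tilde{U}(x)$ they perceive at the time of choice, while their realized welfare is $U(x)$ and third parties bear effects $E(x)$:

\[
I(x) = U(x) - \tilde{U}(x), \qquad W(x) = U(x) + E(x) = \tilde{U}(x) + I(x) + E(x).
\]

Here $I(x)$ denotes the internality, the unaccounted costs or benefits the agent incurs, and $E(x)$ denotes the externality, the unaccounted effects on others; total welfare $W(x)$ deviates from the agent's perceived payoff by $I(x) + E(x)$. On this sketch, the compounding case the paper warns of corresponds to $I(x) < 0$ and $E(x) < 0$ holding simultaneously, so that realized welfare falls doubly short of what the choosing agent perceives.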