I will describe the current most likely singularity scenario (AI “waking up”), why that is the case, and the forces catalyzing it (e.g. Google’s business plan). I will discuss the OpenAI approach of Elon Musk et al., its core assumption (that we can actually regulate AI, or create friendly AI), why that assumption is mistaken, and therefore why the approach may actually be worse for humanity: it catalyzes AI development even further without providing real protection against the downsides.

Then I will talk about ways to frame what “humanity” actually is, and argue that our biological selves are not the essence of humanity; the essence is rather our patterns of intelligence. I will point to 100 years in the future where, no matter the route taken, the majority of intelligent entities will be thought patterns running on silicon. “AI waking up” is only one path there, and it could even be framed as the next evolution of “humanity,” albeit an unpalatable one for us, because our personal thought patterns do not continue.

Finally, I will describe two other singularity scenarios that *do* preserve our thought patterns and are therefore happier outcomes: the Em scenario and the BW++ scenario. I will describe the state of the art of the technology behind each, and what we can do to improve the chances of one of these scenarios happening.