If current physical theories are approximately correct, human extinction is inevitable eventually. But humanity could also enjoy a very long future before that point.
An existential risk, following Nick Bostrom, is one where "an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". Here, we focus on a narrow type of existential risk: the risk of human extinction at the hands of artificial intelligence.
AI-driven human extinction is a trope of science fiction, but it is also a chief concern among researchers who study existential risks. In one informal survey conducted by the Future of Humanity Institute in 2008, researchers gave a median estimate of a 5% chance that humanity will go extinct due to AI by 2100.
More information about a potential future AI catastrophe can be found in this related question.