From Wikipedia: "the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators... approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control."
Here is an introductory video, and see this question for a definition of AGI arrival.
Will the control problem be solved before the creation of Artificial General Intelligence?
This question will resolve as Positive if there is expert consensus that the control problem has been solved before AGI arrives, and as Negative if AGI arrives before such a consensus is reached.
Note that this question is specifically about AGI, not Artificial Superintelligence (ASI). If, in a slow take-off scenario, the control problem is solved after AGI but before ASI, the question still resolves as Negative.