In this paper we present an eye-tracking experiment investigating the control of visual attention during spatial decision making. Participants were presented with screenshots taken at different choice points in a large, complex virtual indoor environment. Each screenshot depicted two movement options, and participants had to decide between them in order to search for an object hidden in the environment. We demonstrate (1) that participants reliably chose the movement option featuring the longer line of sight, (2) that gaze was robustly biased towards the eventually chosen movement option, and (3) that participants' fixation behavior could be predicted using a bottom-up description capturing aspects of the geometry of the depicted scenes. Taken together, the results of this study shed light on the control of visual attention during navigation and wayfinding.