In this paper, we investigate the issues that arise when spatial abstractions do not capture all the details necessary for correct internal reasoning. We argue that in a general-purpose reasoning system, an imperfect abstract problem representation may be all that is available for any given problem. We propose that some forms of such imperfect representations are still useful in problem solving and can serve as the basis for heuristic transfer of learning between problem instances. However, there are cases where they are inadequate, such as tasks in which improper actions might have dire consequences. To compensate, an agent can use a concrete problem representation based on imagery, in parallel with the abstract representation, to predict the consequences of actions and thereby avoid mistakes. We present a model demonstrating the usefulness of imagery for handling aspects of problem solving that the available high-level representation cannot capture.