The conventional approach to speech production assumes that a linguistic control signal feeds down into an execution module where the vocal articulators are coordinated. The linguistic signal takes the form of a stream of phonological units, that is, discrete symbolic commands. A variety of control architectures in cognitive robotics likewise rest on symbolic commands. Symbolic motor control faces problems, however, and robotics offers alternatives to the assumption of symbols. This paper focuses on one such alternative: it introduces a minimal neural field model of speech motor planning and production. The model illustrates how some simple words may be represented for perception and production without coding the words in terms of phonological units. The concluding discussion considers how a scaled-up version of the model supports a construction grammar account of speech and language.
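The neural field model itself is developed in the body of the paper. Purely as an illustrative sketch of the kind of dynamics such models build on, the following simulates a one-dimensional Amari-style dynamic neural field, in which a self-sustained peak of activation, rather than a discrete symbol, stands for a planned target. Every parameter value and the choice of kernel here are hypothetical, not taken from the paper.

```python
import numpy as np

# Illustrative 1D Amari-style neural field (all parameters hypothetical):
#   tau * du/dt = -u + h + S(x,t) + integral of w(x - x') f(u(x')) dx'
N = 101
x = np.linspace(-10.0, 10.0, N)       # field dimension (e.g. a motor parameter)
dx = x[1] - x[0]
tau, h = 10.0, -1.5                    # time constant and resting level

def kernel(d):
    """Mexican-hat interaction: local excitation, broader inhibition."""
    return (4.0 * np.exp(-d**2 / (2 * 1.5**2))
            - 1.5 * np.exp(-d**2 / (2 * 4.0**2)))

W = kernel(x[:, None] - x[None, :]) * dx   # discretized interaction matrix

def f(u, beta=4.0):
    """Sigmoidal output nonlinearity."""
    return 1.0 / (1.0 + np.exp(-beta * u))

u = np.full(N, h)                          # field starts at resting level
stim = 4.0 * np.exp(-x**2 / 2.0)           # localized input, e.g. a cued target

dt = 1.0
for t in range(300):
    s = stim if t < 150 else 0.0           # input is removed halfway through
    u += (dt / tau) * (-u + h + s + W @ f(u))

# After the input is gone, a localized peak of activation remains:
# the field "remembers" the planned target without a discrete symbol.
print(u.max() > 0.0, (u > 0.0).sum() < N // 2)
```

The point of the sketch is the last line: a transient stimulus leaves behind a stable, spatially localized activation peak, which is the kind of non-symbolic representational state that neural field accounts of motor planning appeal to.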