Probabilistic Language Modeling with Hidden Stochastic Automata

Abstract

In this paper, we introduce the Hidden Stochastic Automaton, a novel dynamic Bayesian network model for probabilistic language modeling. Although based on a generalization of the hidden Markov model (HMM), it has qualitatively greater generative power than the HMM itself or any of its existing variants and generalizations. The Hidden Stochastic Automaton can therefore serve as a probabilistic model of natural languages in a way that no existing dynamic Bayesian network can. Its relevance to Cognitive Science is primarily as a computational model of cognition, in the sense of Marr (1982), but potentially also as a model of resource-bounded cognitive processing and of the implementation of computation in physical dynamical systems.
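
As background for the generalization claim above, the following is a minimal Python sketch of the standard first-order hidden Markov model that the Hidden Stochastic Automaton is said to extend. The abstract does not define the new model itself, so only the HMM baseline is shown; all parameter values and symbol names are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Minimal generative sketch of a first-order HMM: a hidden state
    # evolves by a Markov transition matrix, and each state emits a
    # visible symbol. Parameters below are illustrative assumptions.
    rng = np.random.default_rng(0)

    pi = np.array([0.6, 0.4])          # initial hidden-state distribution
    A = np.array([[0.7, 0.3],          # transition matrix: P(next state | state)
                  [0.2, 0.8]])
    B = np.array([[0.5, 0.4, 0.1],     # emission matrix: P(symbol | state)
                  [0.1, 0.3, 0.6]])
    symbols = ["a", "b", "c"]

    def sample_sequence(length):
        """Draw a symbol sequence from the HMM's joint distribution."""
        state = rng.choice(len(pi), p=pi)
        out = []
        for _ in range(length):
            out.append(symbols[rng.choice(len(symbols), p=B[state])])
            state = rng.choice(len(A), p=A[state])
        return out

    print("".join(sample_sequence(10)))

Because the hidden state here is a single discrete Markov chain, such a model captures only regular (finite-state) structure; the abstract's claim is that the Hidden Stochastic Automaton goes qualitatively beyond this baseline.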

