# Convergence Bounds for Language Evolution by Iterated Learning

- Anna Rafferty,
*University of California, Berkeley*
- Thomas Griffiths,
*University of California, Berkeley*
- Dan Klein,
*University of California, Berkeley*

## Abstract

Similarities between human languages are often taken as evidence
of constraints on language learning. However, such similarities could also be
the result of descent from a common ancestor. In the framework of iterated
learning, language evolution converges to an equilibrium that is independent of
its starting point, with the effect of shared ancestry decaying over time.
The central question is therefore the rate of this convergence, which we
analyze formally here. We show that convergence occurs in a number of
generations that is O(n log n) for Bayesian learning of the ranking of n
constraints or the values of n binary parameters. We also present simulations
confirming this result and indicating how convergence is affected by the entropy
of the prior distribution over languages.
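The O(n log n) bound has the flavor of a coupon-collector argument: the influence of the ancestral language disappears once every one of the n parameters has been relearned at least once. The toy model below is not the paper's Bayesian learner; it is a minimal sketch, assuming (hypothetically) that each generation resamples a single randomly chosen parameter from the prior, which makes the forgetting time exactly the classic coupon-collector time n·H_n = O(n log n).

```python
import random


def forgetting_time(n, rng):
    """Generations until every one of n binary parameters has been
    relearned at least once. Toy assumption: each generation, one
    parameter is picked uniformly at random and resampled from the
    prior, erasing the ancestral value of that parameter. The waiting
    time is the coupon-collector time, n * H_n = O(n log n)."""
    touched = set()
    gens = 0
    while len(touched) < n:
        touched.add(rng.randrange(n))
        gens += 1
    return gens


if __name__ == "__main__":
    rng = random.Random(0)
    for n in (8, 32, 128):
        trials = [forgetting_time(n, rng) for _ in range(200)]
        avg = sum(trials) / len(trials)
        predicted = n * sum(1 / k for k in range(1, n + 1))  # n * H_n
        print(f"n={n:4d}  simulated {avg:7.1f}  n*H_n {predicted:7.1f}")
```

Running the sketch shows the simulated averages tracking n·H_n, i.e. growing as O(n log n) in n, consistent with the bound stated in the abstract.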
