Visual word recognition in alphabetic languages such as English has been shown to be left hemisphere (LH) lateralized, and this lateralization has been argued to be linked to the LH superiority in language processing. Nevertheless, Chinese character recognition has been shown to be more bilateral or right hemisphere (RH) lateralized, and thus constitutes a counterexample to this claim. LH processing has been shown to have a high spatial frequency (HSF) bias, whereas RH processing has a low spatial frequency (LSF) bias. Through computational modeling, here we test the hypothesis that English word recognition is lateralized to the LH and Chinese to the RH because of the visual characteristics of words rather than language lateralization. We show that at least two factors may account for this dichotomy: (1) Visual similarity among words: the smaller the alphabet, the more similar the words in the lexicon, and the more the model relies on HSFs to distinguish them. (2) The requirement to decompose words into letters in order to map them to phonemes when learning to read English: mapping a word input to its constituent letters requires more HSF information than mapping it to its word identity. English has a large lexicon but only 26 letters, whereas Chinese has a much smaller lexicon with a much larger alphabet (stroke patterns). In addition, Chinese is a logographic system: stroke patterns do not map to phonemes, and thus no decomposition is required. Hence, the lateralization of visual word recognition across languages may depend on the visual characteristics of words rather than on LH language lateralization as previously thought.
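The alphabet-size argument can be illustrated with a toy simulation. The sketch below is not the authors' model; it uses positional symbol overlap (Hamming similarity) as a crude stand-in for visual similarity, and the function name, word length, and lexicon size are all hypothetical choices. Random words drawn from a small symbol inventory (26, English-like letters) overlap more, on average, than words drawn from a large inventory (1000, Chinese-like stroke patterns):

```python
import random
from itertools import combinations

def mean_pairwise_similarity(alphabet_size, word_len=4, n_words=200, seed=0):
    """Mean fraction of matching positions (Hamming similarity)
    across all word pairs in a randomly generated lexicon.
    All parameters are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    lexicon = [tuple(rng.randrange(alphabet_size) for _ in range(word_len))
               for _ in range(n_words)]
    sims = [sum(a == b for a, b in zip(w1, w2)) / word_len
            for w1, w2 in combinations(lexicon, 2)]
    return sum(sims) / len(sims)

small_alphabet = mean_pairwise_similarity(26)    # English-like letter inventory
large_alphabet = mean_pairwise_similarity(1000)  # Chinese-like stroke inventory
print(small_alphabet, large_alphabet)
```

With a 26-symbol alphabet the expected per-position match rate is 1/26, an order of magnitude higher than with 1000 symbols, so a lexicon built from fewer symbols is intrinsically more confusable and discriminating its members requires finer (higher spatial frequency) detail.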
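The HSF/LSF distinction itself can be sketched with a standard image-filtering decomposition: low-pass filter an image with a Gaussian kernel to approximate its LSF content, and take the residual as its HSF content. This is a generic signal-processing illustration, not the paper's method; the stimuli and parameters below are hypothetical. Two "words" differing only in one fine stroke remain distinguishable in the HSF residual but are strongly attenuated in the LSF component:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    """Normalized 2D Gaussian kernel (illustrative parameters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def low_pass(img, kernel):
    """Naive 2D convolution with zero padding (LSF approximation)."""
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad)
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + ks, j:j + ks] * kernel).sum()
    return out

# Two toy "words" that differ only in one fine, single-pixel stroke.
a = np.zeros((16, 16)); a[4:12, 4] = 1.0; a[4, 4:12] = 1.0
b = a.copy(); b[8, 8] = 1.0

k = gaussian_kernel()
lsf_diff = np.abs(low_pass(a, k) - low_pass(b, k)).max()       # LSF channel
hsf_diff = np.abs((a - low_pass(a, k)) - (b - low_pass(b, k))).max()  # HSF channel
print(lsf_diff, hsf_diff)
```

Because convolution is linear, the low-pass channel spreads the single-pixel difference across the kernel's support, so the peak difference available to an LSF-biased system is much smaller than in the HSF residual, consistent with the claim that fine within-lexicon distinctions favor HSF processing.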