Published August 18, 2023 | Version v1
Journal article | Open Access

Complexity and the Phonological Turing Machine

  • 1. University of Cambridge

Description

Any scientific theory, linguistic theories included, requires a coherent philosophical basis for evaluating and decoding the relations between data, phenomena, and theory. Chomsky’s (1964; 1965) identification of explanatory adequacy as one of the ultimate goals of a theory of I-language was a seminal step in this regard. This paper explores this term and its implications with special reference to phonological theory. In particular, four groups of theories are evaluated against the metric of explanatory adequacy: Chomsky & Halle’s (1968) Rule-Based Phonology, Stampe’s (1979) Natural Phonology, Optimality Theory (Prince & Smolensky, 1993; McCarthy & Prince, 1993), and Substance-Free Phonology (SFP; Hale & Reiss, 2008; Samuels, 2009). SFP is found to be the most promising from a minimalist, computational perspective. This theoretical foundation is subsequently adapted into Watumull’s (2012; 2015) Turing programme for linguistic theory. From this perspective, the mind is viewed as equivalent to a Turing machine – the universal model of computation. Framed as such, a novel approach emerges to computational complexity and economy in (phonological) derivations, issues that are otherwise often invoked only nebulously in the evaluation of theories. A method of analysis is introduced by adopting ‘Big-O notation’ as used for asymptotic analysis in computer science. This method is shown to highlight the importance of rigorously defining the inventory of computational primitives and procedures within a theory. Specific suggestions regarding the nature of the most optimal theory under this analysis are made with respect to Samuels’ (2009) explicitly Minimalist brand of SFP.

Files

02_02_Van_Steene.pdf (438.6 kB)
md5:72eb76c7f3e81432402f2f9291bdc36f