Lexical retrieval is the process of getting from an abstract concept to a spoken word. Both speech perception and production depend on accessing the specific lexical items needed to convey a specific concept.
Words represent sensory knowledge, images, and emotions encountered in a speaker's life, and are a compilation of semantic, syntactic, morphological, and phonological properties that:
-refer to an entity
-are constructed according to language-specific constraints
Words can refer to entities that are concrete, abstract, or nonexistent, but must always relate to some entity.
Concrete: Dog, pencil
Abstract: Love, happiness
Nonexistent: Unicorn, Tooth Fairy
When examining a word, we can take into account two important parts:
-The true essence of the word’s meaning (denotation)
-All the inferences that can be made about the word (connotation)
The denotation of a word gives its concrete, generally agreed-upon facts, whereas connotation is less about the facts and more about the experiences the word evokes for you. The meaning of any word is based on its relationship to other words; no word is an island without semantic connections.
A lexical item is a word, or group of words, stored in the lexicon that gives information about:
-Phonological form (sound)
-Semantic properties (meaning)
-Grammatical function (part of speech)
-Relationships to other words (categories)
The following are several theories that each account for different aspects of lexical retrieval.
When phonological form is processed, concepts, images, and sounds are accessed until the target word is reached. This theory posits that lexical retrieval begins with more general, abstract features and moves to more specific features as time elapses.
For example, if you were asked to say out loud the first few words you think of when someone says the word ‘fish’ the more common responses are:
-food or pet, depending on which is more frequent for that speaker (the frequency effect)
But as time elapses, the listener retrieves words from deeper in their lexicon and produces words with more specificity and detail relative to the concept (fish).
A prototype of a category is the member of that category that best represents it – prototypes are what people have the most experience with, making them the most common members of their category.
Lexical entries for typical items are learned first and accessed more frequently. This shows that certain lexical entries better represent their categories over other entries.
For example, apple would be a prototype of the category fruit.
Typical members, or prototypes, are not only recalled quicker than atypical entries, but they share more features with other members of the category due to an overlap in distinguishing features that represent the members of the category.
Ex: apple represents fruit better than tomato because of its texture, sweetness, etc. Apple also does not share features with many other categories, unlike tomato, which overlaps with vegetable.
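The idea that a prototype shares more features with other category members can be sketched computationally. The feature sets below are invented for illustration (they are not from the original notes); the point is only that apple overlaps more with other fruits than tomato does.

```python
# Toy prototypicality-as-feature-overlap sketch. Feature sets are
# hypothetical, chosen to illustrate the apple-vs-tomato contrast.

FRUIT_FEATURES = {
    "apple":  {"sweet", "grows_on_tree", "eaten_raw", "has_seeds"},
    "orange": {"sweet", "grows_on_tree", "eaten_raw", "has_seeds"},
    "banana": {"sweet", "grows_on_tree", "eaten_raw"},
    "tomato": {"savory", "grows_on_vine", "eaten_raw", "has_seeds"},
}

def overlap_score(member, category):
    """Average number of features shared with the other category members."""
    others = [feats for m, feats in category.items() if m != member]
    shared = [len(category[member] & feats) for feats in others]
    return sum(shared) / len(shared)

# Apple overlaps more with the other fruits than tomato does, so it is
# the better prototype on this toy metric.
assert overlap_score("apple", FRUIT_FEATURES) > overlap_score("tomato", FRUIT_FEATURES)
```

On this toy metric, apple averages 3 shared features with the other members while tomato averages under 2, mirroring the typicality contrast described above.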
Similar to prototypes, the Typicality Effect shows that individuals respond more quickly to typical examples, or prototypes, of a category than they would to atypical examples.
For example, if asked to name a bird, the individual is more likely to respond with 'robin' or 'eagle' than with 'turkey' or 'penguin'. Similarly, if posed the question 'Is this a bird?', the response time when shown a picture of a turkey would be longer than with a picture of a robin.
Word frequency affects word retrieval at the morphological and phonological levels. Thus, when a listener is presented with a picture or verbal representation of a word, they will retrieve the more frequent words first.
For example, if you were asked to say out loud the first few words you think of when someone says the word ‘fruit’, you are more likely to say apples, banana, or oranges, before retrieving mango, plum, or kiwi from your lexicon.
Semantic Network Models
Collins and Quillian (1969) examined RTs in measuring recognition of truth conditions, showing that sentences requiring fewer connections have quicker verification times. Sentences with a one-to-one correspondence, requiring no connections, have the quickest response times. These times depend on category size and on what is linked where in the hierarchy.
In the following example, each successive sentence requires more connections, prolonging the response time for verification.
-Robin is a robin. (Fastest RT because there are no connections to make)
-Robin is a bird. (Next fastest RT because there is one connection to make: robin → bird)
-Robin is an animal. (Slower still because there are two connections to make, robin → bird → animal, and the category of animals is larger than that of birds)
-Robin is a fish. (Slowest RT because the sentence is false: no chain of connections links robin to fish)
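The verification pattern above can be sketched as traversal of IS-A links in a tiny network. This is a hedged toy model, not the original study's implementation; the node names are illustrative.

```python
# Toy Collins & Quillian-style network: IS-A links form a hierarchy,
# and verification cost grows with the number of links traversed.

IS_A = {"robin": "bird", "bird": "animal", "trout": "fish", "fish": "animal"}

def links_to_verify(subject, category):
    """Number of IS-A links traversed to verify 'subject is a category',
    or None if no path exists (the sentence is false)."""
    node, hops = subject, 0
    while node is not None:
        if node == category:
            return hops
        node = IS_A.get(node)
        hops += 1
    return None

print(links_to_verify("robin", "robin"))   # 0 links: fastest
print(links_to_verify("robin", "bird"))    # 1 link: next fastest
print(links_to_verify("robin", "animal"))  # 2 links: slower
print(links_to_verify("robin", "fish"))    # None: false, slowest to reject
```

Note that the hop count alone does not capture the category-size effect mentioned above; a fuller model would also weight each step by how many items the category contains.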
Hierarchical Network Model
Lexical entries are stored in hierarchies based on the number of lexical items that share a feature. The fewer related lexical items, the quicker the response time.
For example, take a look at the following sentence verification times:
-Robins eat worms. 1310 ms
-Robins have feathers. 1380 ms
-Robins have skin. 1470 ms
With the first sentence, little else comes to mind when thinking of what eats worms. With the second, the connection extends to all birds that have feathers, not just robins, making the RT a bit longer. The last sentence has the slowest RT because the set of items that have skin is much larger than the categories the first two sentences dealt with.
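The hierarchical model stores each property at the most general level where it applies, so verifying "Robins have skin" means climbing from robin up to animal. A minimal sketch, with an illustrative hierarchy and property placement:

```python
# Sketch of hierarchical property storage: each property lives at the
# most general node it applies to, and verification RT grows with the
# number of IS-A links climbed to reach it. Placement is illustrative.

IS_A = {"robin": "bird", "bird": "animal"}
PROPERTIES = {
    "robin":  {"eats worms"},      # specific to robins
    "bird":   {"has feathers"},    # shared by all birds
    "animal": {"has skin"},        # shared by all animals
}

def levels_to_property(subject, prop):
    """Number of IS-A links climbed before the property is found."""
    node, hops = subject, 0
    while node is not None:
        if prop in PROPERTIES.get(node, set()):
            return hops
        node = IS_A.get(node)
        hops += 1
    return None

print(levels_to_property("robin", "eats worms"))    # 0 levels (1310 ms)
print(levels_to_property("robin", "has feathers"))  # 1 level  (1380 ms)
print(levels_to_property("robin", "has skin"))      # 2 levels (1470 ms)
```

The hop counts reproduce the ordering of the verification times above: the further up the hierarchy the property is stored, the longer the RT.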
An issue with this theory is that the model only accounts for concrete entities. Abstract entities such as love, justice, or truth have no hierarchies involved – we have only our own experiences to define them.
Conrad (1972) and Wilkins (1971) showed that hierarchical effects do not hold across the board: RT varies with how widely an attribute is shared. When the attribute applies to the target alone, RT is very quick. When the attribute applies to many other lexical items, RT lengthens with the overlap in attribution.
-A robin has a red breast. (attributed only to robins) Fastest, due to a one-to-one correspondence
-A robin has wings. (attributed to all birds) Next fastest, due to the small number of entities with wings
-A robin has lungs. (attributed to all animals) Slowest, due to the larger number of entities with lungs
This shows us that we do not reject all untrue statements equally.
For example, take the following statements:
-Ice cream is an elephant.
-Ice cream is a spoon.
Both statements are equally untrue but differ in response times. The nouns in the second sentence are more semantically related than those in the first, showing that the more related items are, the harder they are to distinguish.
Priming shows that previously activated lexical entries influence lexical retrieval, and there are several types of priming that do so. When pathways and connections are well maintained, RT is quicker; when a pathway is weak, RT is slower.
Form priming: phonological relation between the prime and target. (Sand for Hand)
Semantic priming: semantic relation between the prime and target, sharing specific elements of semantic features. (Car for Truck)
Orthographic priming: prime and target share initial orthography (Track for Tractor)
Morpheme priming: morphologically complex items prime for the root. (Miscommunication for Communicate)
Priming based on binary features:
-[+/- sem] semantic transparency, based on shared semantic features: blueberry primes for blue (whereas strawberry does not prime for straw)
-[+/- morph] morphological composition: happiness primes for happy; a morphologically complex item primes for its root
-[+/- phon] phonological form: network primes for net; lexical items sharing initial sounds may elicit quicker responses
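The binary features above can be sketched as flags computed over prime–target pairs. The tiny lexicon below (roots, meaning features, and a crude initial-letters check for shared sounds) is entirely invented for illustration:

```python
# Hypothetical sketch of the [+/- sem], [+/- morph], [+/- phon] priming
# features. The lexicon is a toy: roots and semantic features are
# invented, and 'phon' is approximated by shared initial letters.

LEXICON = {
    "blueberry":  {"root": "blue",  "semantic": {"fruit", "blue"}},
    "blue":       {"root": "blue",  "semantic": {"blue", "color"}},
    "strawberry": {"root": "straw", "semantic": {"fruit", "red"}},
    "straw":      {"root": "straw", "semantic": {"dry", "plant"}},
    "happiness":  {"root": "happy", "semantic": {"emotion", "positive"}},
    "happy":      {"root": "happy", "semantic": {"emotion", "positive"}},
}

def priming_features(prime, target):
    """Return the three binary priming flags for a prime-target pair."""
    p, t = LEXICON[prime], LEXICON[target]
    return {
        "sem":   bool(p["semantic"] & t["semantic"]),  # shared meaning features
        "morph": p["root"] == t["root"],               # shared root morpheme
        "phon":  prime[:3] == target[:3],              # shared initial sounds (crude)
    }

print(priming_features("blueberry", "blue"))
# {'sem': True, 'morph': True, 'phon': True}
print(priming_features("strawberry", "straw"))
# {'sem': False, 'morph': True, 'phon': True}
```

The second pair captures the contrast in the list above: strawberry shares form with straw but is semantically opaque, so it is [-sem] even though it is [+phon].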
Donavon Thomas (2016) (Adapted from Robin Aronow)