Cognition

Volume 115, Issue 1, April 2010, Pages 39-45

Grasping spheres, not planets

https://doi.org/10.1016/j.cognition.2009.11.006

Abstract

Memory for objects helps us to determine how we can most effectively and appropriately interact with them. This suggests a tightly coupled interplay between action and background knowledge. Three experiments demonstrate that grasping circumference can be affected by the size of a visual stimulus (Experiment 1), by whether that stimulus appears to be graspable (Experiment 2), and by the presence of a label that renders that object ungraspable (Experiment 3). The results are taken to inform theories on conceptual representation and the functional distinction that has been drawn between the visual systems for perception and action.

Introduction

According to ecological approaches to cognition (Gibson, 1979; Glenberg, 1997), memory for objects helps us to determine how we can act given the constraints inherent to our bodies and what we are capable of doing in a given environment (i.e., “mesh”). Neuroimaging (Martin, Wiggs, Ungerleider, & Haxby, 1996) and behavioral data (Myung, Blumstein, & Sedivy, 2006) are consistent with the claim that functional actions are primed by functional objects (e.g., tools) and, at least partially, by nouns (object labels) that refer to them (e.g., Bub et al., 2008; Grafton et al., 1997). For example, when participants passively view or imagine using tools, more extensive activation is found in motor-related cortical regions than when they view visually similar objects (Creem-Regehr, Dilda, Vicchrilli, Federer, & Lee, 2007) or merely graspable objects (Creem-Regehr & Lee, 2005). This motor activation is often interpreted as contributing to object categorization (Martin et al., 1996), motor imagery (Postle, McMahon, Ashton, Meredith, & de Zubicaray, 2008), or conceptual representation (Bub & Masson, 2006). Indeed, the action-relevant background knowledge that nouns provide about objects affects the hand during reach-to-grasp movements. For example, when grasping a block of fixed size, a participant’s hand aperture is larger when “apple” is read than when “grape” is read (Glover, Rosenbaum, Graham, & Dixon, 2004).

When people make reach-to-grasp responses to familiar objects, what information do they rely on? One possibility is that they rely exclusively on visual information to guide their grasp. A second possibility is that they rely on background knowledge about the object in question. A third possibility is that they rely on a combination of visual information and background knowledge. More than a decade and a half ago, Goodale and Milner (1992) distinguished between two systems of human vision that correspond to the anatomical separation between the dorsal and the ventral visual streams. The dorsal stream is necessary for normal visuomotor guidance (“vision for action”), whereas the ventral stream is necessary for object and scene recognition.

Much subsequent research has examined the role of the ventral stream, and thus memory representations, in visuomotor guidance. During the offline control of action (e.g., during planning), the ventral stream provides background information for the control of actions (Glover, 2004, Goodale, 2008). In line with this, behavioral studies demonstrate that action plans are influenced by relevant background and goal-related information. For example, grasping a familiar object requires knowing about its purpose. Crucially, coherent, goal-directed grasping partially relies on semantic processing; a concurrent semantic task (e.g., a free-association task) disrupts participants’ ability to grasp an object in a manner that is appropriate for future action, relative to a non-semantic control task (e.g., articulatory suppression; Creem & Proffitt, 2001, Experiment 3). Moreover, co-actors seamlessly use information gleaned from each other, for example, about an object’s weight (Meulenbroek, Bosga, Hulstijn, & Miedl, 2007), and information about shared goals (Sebanz, Knoblich, Prinz, & Wascher, 2006) in order to act and co-act more effectively. One would grasp a pair of scissors by the handles when preparing to use them and by the blades when preparing to hand them to someone else; in general, goal-directed action consists of imposing an internally pre-specified, desired effect on one’s environment (Waszak et al., 2005). Each result is consistent with the claim that background information is routinely recruited from a variety of sources (semantic memory, one’s goals, the environment, or peers) in order to guide behavior.

Moreover, visuomotor guidance may be “contaminated” by exogenous semantic information. For example, when participants reach for objects, their hand movements are affected by seemingly irrelevant semantic information, such as words affixed to the objects (Gentilucci & Gangitano, 1998) or incidentally presented adjectives (Glover & Dixon, 2002). Findings such as these can be accommodated by assuming that the dorsal stream comprises two functionally distinct streams: the dorso-dorsal stream, whose function is to provide on-line control of actions (like the dorsal stream in Milner and Goodale’s original framework), and the ventro-dorsal stream, which plays a role in action organization as well as in action understanding and space perception (Rizzolatti & Matelli, 2003).

A neuroimaging study tested the effects of object “identity” and object orientation on the dorsal and ventral streams (Valyear, Culham, Sharif, Westwood, & Goodale, 2006). Participants passively viewed two images of objects presented in succession (with a 1.25 s mask separating each image). The second image was either (1) exactly the same as the first, (2) the same object, but oriented differently, (3) a different object, but oriented identically, or (4) a different object that was differently oriented. The results demonstrated a double dissociation: the dorsal stream was sensitive to changes in object orientation (but not to object changes), while the ventral stream was sensitive to object changes (but not to orientation changes). A follow-up study using a similar paradigm (Rice, Valyear, Goodale, Milner, & Culham, 2007) found that the dorsal stream responded selectively to orientation changes for graspable objects (e.g., tools) but not for larger, non-graspable objects (e.g., vehicles). The results were taken to demonstrate that the dorsal stream is sensitive to action-relevant information about visually presented graspable objects (orientation), but not to information that is relevant to the “identity” of objects.

The primary motivation of the present series of experiments was to examine the boundaries of the semantic contamination effect on grasping. That is, does the availability of semantic information about an object exert a top-down influence on the relationship between its grasp-relevant physical properties and grasping behavior? The results can inform theories of conceptual representation and of the functional distinction that has been drawn between the ventral and dorsal streams.

Participants made responses to objects shown on a computer screen (the imperative stimuli) by grasping and squeezing a different object (a pressure bulb). There were two questions of interest. The first was whether responses to the pressure bulbs would be influenced by the perceived affordance (Gibson, 1979), the “graspability”, of the imperative stimulus (Experiments 1 and 2). The second was whether the visual “graspability” of the imperative stimulus could be overridden by a verbal label (Experiment 3).

Participants held down two keys on a keyboard and then made a response to a visually presented stimulus, either to its shape (Experiments 1 and 3) or to its color (Experiment 2), by moving their hand to and squeezing one of two pressure-sensitive rubber bulbs mounted on either side of the computer screen. The bulbs were connected via tubes to a pressure gauge, which measured the air pressure inside the bulb-and-tube system, thereby providing an estimate of the amount of force used to squeeze the bulbs. When the air pressure passed a certain threshold, the visual stimulus disappeared from view. Participants received practice trials so that they could calibrate to the amount of force needed to advance to the next trial. For the current experiments, the threshold was set to 10 kPa. This threshold is rather low: the average response was well over 20 kPa, and the maximum possible response was approximately 80 kPa.
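To make the trial logic concrete, the following minimal sketch shows one way a threshold-gated squeeze trial of this kind could be implemented. It is an illustration only, not the authors’ software: `read_pressure_kpa`, `hide_stimulus`, and `response_complete` are hypothetical stand-ins for the apparatus interface.

```python
# Minimal sketch of a threshold-gated squeeze trial (hypothetical code;
# the article does not report its experiment software or API).

THRESHOLD_KPA = 10.0  # gauge pressure at which the stimulus disappears


def run_trial(read_pressure_kpa, hide_stimulus, response_complete):
    """Return the peak squeeze pressure (in kPa) recorded during one trial."""
    peak = 0.0
    stimulus_visible = True
    while not response_complete():
        pressure = read_pressure_kpa()  # current gauge reading (kPa)
        peak = max(peak, pressure)      # dependent measure: maximum pressure
        if stimulus_visible and pressure >= THRESHOLD_KPA:
            hide_stimulus()             # crossing the threshold blanks the display
            stimulus_visible = False
    return peak
```

Because the dependent measure is the peak pressure rather than the threshold crossing itself, the low 10 kPa cutoff merely ends the stimulus display; it does not constrain how hard participants ultimately squeeze.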

The bulb apparatus was a closed system (i.e., the amount of air in the apparatus remained constant during a response), so there was a functional relationship between the circumference of a participant’s final grip and the amount of force that needed to be applied in order to reach that final state. We therefore assumed that maximum squeeze pressure could be taken as a measure of final hand aperture. That is, decreases in final grip size corresponded to increases in the pressure measured by the apparatus.
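The direction of this relationship follows from a simple idealization: if the trapped air is treated as an isothermal ideal gas, Boyle’s law gives

\[
P_{\mathrm{abs}}\,V = P_0\,V_0
\quad\Longrightarrow\quad
\Delta P = P_{\mathrm{abs}} - P_0 = P_0\!\left(\frac{V_0}{V} - 1\right),
\]

where \(P_0\) and \(V_0\) are the ambient pressure and initial volume of the bulb-and-tube system, and \(V\) is the enclosed volume at the end of the squeeze. The gauge reading \(\Delta P\) therefore increases monotonically as \(V\), and with it the final grip circumference, decreases.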


Experiment 1

Participants responded to spheres and cubes presented on a computer screen; the stimuli had diameters (or side lengths, in the case of cubes) of 100, 150, 300, and 400 pixels, corresponding to approximate actual display sizes of 4.0, 6.0, 12.0, and 16.0 cm, respectively (see Fig. 1). They responded with their right hand if the shape was a sphere and with their left if it was a cube. Only responses given with the dominant (right) hand were recorded. The key manipulation was that the 4.0 cm stimuli, unlike the larger ones, appeared smaller than the response bulbs and thus graspable.

Experiment 2

The results of Experiment 1 do not allow us to determine whether the affordance effect is due to the grasping affordance of the object in question or to some other aspect of its size. To adjudicate between these possibilities, we created a second set of stimuli, spiked spheres (see Fig. 2). By attaching spikes to the spheres, we intended to render them “ungraspable” and thus remove the grasp affordance. We also retained the “spikeless” spheres from Experiment 1. Thus, if the grasp affordance drives the effect, it should be observed for the spikeless spheres but not for the spiked spheres.

Experiment 3

Experiment 2 suggests that the effect is due to perceived affordances of the stimuli. However, these affordance effects were based on visual information. Is it possible to elicit affordance effects by using verbal labels to access memory representations? If so, this would strongly implicate the role of memory representations in grasping. We tested this hypothesis by using the stimuli from Experiment 1, but referring to them as “planets.” This label should activate background knowledge that planets are far too large to grasp, thereby rendering the stimuli ungraspable.

General discussion

These experiments investigated how the perceived affordances of a visual stimulus affect the grasping of a different object. Experiment 1 showed that the smallest of four spheres shown on a computer monitor yielded smaller grasps (operationalized as greater squeeze force) than the other spheres. The defining characteristic of the smallest sphere was that its diameter appeared smaller than that of the rubber bulbs the participants were using to make responses, whereas the other spheres appeared larger.

References (37)

  • Abrams, R. A., et al. (1991). Mental chronometry: Beyond reaction time. Psychological Science.

  • Angel, A. (1973). Input–output relations in simple reaction time experiments. Quarterly Journal of Experimental Psychology.

  • Bub, D. N., et al. (2006). Gestural knowledge evoked by objects as part of conceptual representations. Aphasiology.

  • Creem, S. H., et al. (2001). Grasping objects by their handles: A necessary interaction between cognition and action. Journal of Experimental Psychology: Human Perception and Performance.

  • Creem-Regehr, S. H., et al. (2007). The influence of complex action knowledge on representations of novel graspable objects: Evidence from functional magnetic resonance imaging. Journal of the International Neuropsychological Society.

  • Gentilucci, M., et al. (1998). Influence of automatic word reading on motor control. European Journal of Neuroscience.

  • Gibson, J. J. (1979). The ecological approach to visual perception.

  • Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences.