Canine Behavior: Genetics vs. Environment
The debate over nature versus nurture as it applies to learning dates back more than a hundred years. During much of the 20th century, the distinction between learned and inherited behavior appeared much clearer than it does today. The idea that any given behavior was either learned or simply developed without learning seemed a rational and straightforward belief. Research based on these expectations led some scientists to conclude that rat-killing behavior among cats, for example, is learned rather than instinctive, that human fears are all acquired, or that intelligence is entirely the result of experience. Learning theorists at this point were arguing that most behavior is learned and that biological factors are of little or no importance. The behaviorist position that human behavior could be explained entirely in terms of reflexes, stimulus-response associations, and the effects of reinforcers upon them, excluding “mental” terms such as desires and goals, was advanced by J.B. Watson in his 1914 book, Behavior: An Introduction to Comparative Psychology. These early scientists routinely employed dogs in their research, with clear indications that learning was taking place. However, debate continues over how much of this learning can be attributed to environmental factors and how much to instinct. This paper examines the relevant scholarly literature concerning operant conditioning in general, and the extent to which it works with dogs in particular, followed by a summary of the research in the conclusion.
Review and Discussion
Background and Overview. There has been a unique bond between humans and dogs throughout history, likely because each species possessed characteristics that were beneficial to the survival of the other. Anyone who has ever owned a dog can attest to the powerful relationship that can develop between owner and canine. The answer to the “why” of the bond of dog ownership, for those who do not understand it, can be partly explained, at least in a clinical sense, by the long list of health benefits associated with pet ownership and bonding. However, this relationship tends to be one-sided in many cases. “We have systematically, through seat-of-the-pants, applied genetics, been changing dogs. We have been modifying them to fit our immediate needs and even to fit our technology” (Dogs and People: The History and Psychology of a Relationship, 1996, p. 54). Human beings have played an instrumental role in creating dogs that fulfill specific needs. By applying the most primitive forms of genetic engineering, humans have bred dogs to accentuate instincts that were evident from their earliest encounters with people.
As noted previously, while details about the evolution of dogs remain unclear, the first dogs were hunters with keen senses of sight and smell; humans subsequently developed these instincts and created new breeds as need or desire arose over time (Dogs and People, 1996). Dogs were able to acquire food by scavenging the campsites of humans; humans could stay warmer in colder climates and have a live alarm system to warn them of predators by allowing the dogs into their dwellings. This close bond between the two eventually grew stronger, gradually maturing into a relationship that was much less utilitarian and more focused on higher-level needs, such as companionship and emotional security (Wendt, 1996).
According to “Dogs and People: The History and Psychology of a Relationship” (1996), “There is a long standing controversy as to where dogs come from. The current belief is that dogs started out originally as wolves. Early in our history we domesticated the wolf and that eventually became our pet dog” (p. 54). The study of such human-animal relations, known formally as anthrozoology, has seen an upsurge of interest in recent years, with researchers from many different academic fields contributing to creative studies involving the multiplicity of interactions between humans and dogs. Podberscek, Paul, and Serpell (2000) reviewed much of this work as it relates to relations between people and companion animals, and stated that the history of companionable human-animal interactions goes back to before the emergence of Homo sapiens as a distinct species. Human ancestors, Homo erectus, also lived in close relationship with the ancestors of modern dogs, Canis familiaris.
Further, long before the conquest of the Americas by Europeans, Amazonian Indians are known to have lived with dogs as companion animals (Schwartz, 1997). At the same time, in pre-modern Europe, people of all classes also lived with dogs as companion animals (Thurston, 1996). Unfortunately, not all human-animal interactions have been so amicable. Animals have been used and abused by humans in ways that many consider unjust and inhumane (Wendt, 1996), and behavioral scientists continue to prefer dogs, among other species, for research purposes today (Gormezano, Prokasy & Thompson, 1987).
Operant Conditioning and Behaviorism. Instrumental, or operant, conditioning occurs when the presentation or withdrawal of a specific stimulus (i.e., a reinforcer or punisher) is made contingent upon an organism’s specific response. Stimuli that tend to increase the rate or magnitude of a specified response (such as a food reward or shock termination) are defined as reinforcers.
By contrast, stimuli that decrease response rate or magnitude (e.g., presentation of a loud noise or shock) are termed punishers (Ader, Baum & Weiner, 1988). In human operant conditioning studies in which feedback for making the specified response is the reinforcer, the procedure is known as biofeedback. While instrumental and classical conditioning are both forms of associative learning, instrumental conditioning differs procedurally in that reinforcement (e.g., presentation of the unconditioned stimulus, or US) is contingent upon the making of a response that is specified by the experimenter (Ader, Baum & Weiner, 1988).
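The reinforcer/punisher distinction can be illustrated with a toy simulation (a hypothetical Python sketch; the class name and the simple probability-update rule are my own assumptions, not a model drawn from the cited literature):

```python
class OperantAgent:
    """Toy model: a single response whose probability is adjusted by its consequences."""

    def __init__(self, p_response=0.5, step=0.05):
        self.p_response = p_response  # current probability of emitting the response
        self.step = step              # how strongly one consequence shifts behavior

    def trial(self, consequence):
        """A reinforcer raises response probability; a punisher lowers it."""
        if consequence == "reinforcer":
            self.p_response = min(1.0, self.p_response + self.step)
        elif consequence == "punisher":
            self.p_response = max(0.0, self.p_response - self.step)
        return self.p_response

agent = OperantAgent()
for _ in range(5):
    agent.trial("reinforcer")   # e.g., a food reward follows the response
print(round(agent.p_response, 2))  # 0.75 -- rate-increasing stimulus
for _ in range(8):
    agent.trial("punisher")     # e.g., a loud noise follows the response
print(round(agent.p_response, 2))  # 0.35 -- rate-decreasing stimulus
```

The point of the sketch is only the direction of change: consequences labeled reinforcers push the response rate up, punishers push it down.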
Operant conditioning is the foundation on which B.F. Skinner explored behavior. According to Ellis (1999), Skinner was possibly the 20th century’s most influential psychologist, and developed “operant conditioning, which is the systematic use of positive reinforcement to modify behavior” (p. 34). A wide range of animal training, parenting techniques and incentive programs, with such common tactics as stock options, bonuses and sales commissions, were all influenced by Skinner’s concept (Ellis, 1999).
Skinner based his approach on Ivan Pavlov’s systematic experimentation with conditioning. Pavlov supervised several physiological laboratories, of which the most prominent was the Imperial Institute of Experimental Medicine in St. Petersburg (Windholz, 1997). Between 1897 and 1936, at least 146 Pavlovians (graduate students and a few permanent coworkers) investigated animals’ brain functions. “The Pavlovians usually experimented at Pavlov’s behest, for he rarely experimented himself, preferring to supervise each experimenter’s work. Breach of the specified experimental procedures evoked Pavlov’s anger” (Windholz, 1997, p. 942). Pavlov was profoundly influenced by Darwin’s concept of the struggle for existence and its functionalistic implications.
In the 1890s, Pavlov speculated that the upper tract of animals’ digestive system responded functionally to specific foods, namely, a small salivary secretion to moist foods, such as meat, and a larger salivary secretion to dry foods, such as bread. Pavlov’s student, S.G. Vul’fson tested this hypothesis in 1897 at the Imperial Institute of Experimental Medicine (Windholz, 1997). In addition to finding experimental support for Pavlov’s hypothesis, Vul’fson also determined, without really trying to do so, that after eating the moist and dry foods, dogs that were “teased” from afar by these foods responded with a corresponding but diminished salivary secretion (Windholz, 1997).
Salivation to stimuli presented at a distance was viewed as being anomalous because, in view of the Cartesian conceptualization of reflexive action, an immediate contact between environmental agents and the organism’s sensory receptors was required. “Pavlov realized that although salivation was of minor physiological importance in the animal’s adjustment to the environment, it was, nevertheless, an observable response to stimuli” (Windholz, 1997, p. 943). In the initial Pavlovian salivary reflex conditioning experiments, the subjects used were restrained dogs; however, in the 1930s, Pavlov’s laboratory obtained two chimpanzees for similar experimentation.
Pavlov was critical of Wolfgang Köhler’s explanation of problem solving, and successfully replicated Köhler’s experiment in which an ape had to reach a suspended bait by building a tower of different-sized boxes. Through observations of the ape’s behavior, Pavlov noted that it took several months to solve this problem; the ape was required to move the boxes directly under the bait, arrange them in vertical order with the larger below the smaller, and test the construction’s stability by climbing on the uppermost box and moving back and forth. Pavlov called this process the “chaotic” method, which he believed to be identical to Thorndike’s trial-and-error method (Windholz, 1997).
In Pavlov’s view, any organism’s fundamental responses to the external and internal environments are innately determined movements, the unconditioned reflexes, which, if complex, Pavlov termed “instincts.” Pavlov subsequently identified several instincts, two of which were crucial to the individual’s ontogenetic survival: 1) the alimentary instinct, which prompted the organism’s food-seeking and eating behavior, and 2) the defensive instinct, which was aimed at avoiding noxious or harmful stimuli. “The sexual instinct enabled the phylogenetic continuity of the species” (Windholz, 1997, p. 946). When Pavlov subsequently observed the overall behavior of animals in the laboratory, he identified significant individual differences in temperament, which he attributed to three parameters of the nervous system: 1) the intensity or strength of excitation and inhibition, 2) the interrelation between excitation and inhibition, and 3) the rapidity of the spread of these processes.
While permutations of these parameters could result in a variety of temperament types, Pavlov observed only four, which corresponded to Hippocrates’s humors: choleric, melancholic, sanguine, and phlegmatic (Windholz, 1997). In Pavlov’s view, the organism’s unconditioned reflexes (which are generated in the subcortical regions of the brain), are evoked by direct or immediate contact with the external environment.
By contrast, the conditioned reflexes (which are established in the cortical regions of the brain), are evoked by signals emanating from the external environment and comprise the first signal system. Since the environment is changing constantly, the first signal system tends to correct the organism’s adjustment by acquisition and extinction of reactions to signals that originate from the environment; however, because the human environment is mostly social, adjustment can be facilitated by language or by the second signal system (Windholz, 1997).
In 1938, B.F. Skinner identified two types of learning: 1) respondent and 2) operant conditioning (Klein & Mowrer, 1989). The term “respondent conditioning” refers to the learning situation that was investigated by Pavlov: “The conditioned and unconditioned stimuli are paired, and the conditioned stimulus subsequently elicits the conditioned response. In this type of learning, the animal or human passively responds to an environmental stimulus” (Klein & Mowrer, 1989, p. 11). Such respondent behaviors are thought to be mostly internal responses in the form of emotional and glandular reactions to stimuli; however, Skinner was primarily interested in operant conditioning. He maintained that in operant conditioning, an animal or human actively interacts with its environment in order to obtain a reward. “Operant behavior is not elicited by a stimulus: Rather, in anticipation of the consequences of the behavior, an animal or person voluntarily performs a specific behavior if that behavior has previously produced reinforcement” (Klein & Mowrer, 1989, p. 11). Skinner termed this relationship between behavior and reinforcement a “contingency”; the environment determines contingencies, and the animal or human must perform the appropriate behavior in order to obtain a reinforcer.
Edward L. Thorndike’s study of instrumental conditioning showed that animals can learn how to perform certain tasks that provided them with some type of reward (Brush, Overmier & Solomon, 1985). Thorndike’s methods were further refined by B.F. Skinner; the “Skinner Box” provided researchers with yet another tool for examining the relationship between stimulus and operant conditioning. Similar to classical conditioning, operant conditioning examines the response of the learner following a stimulus; however, here, the response is voluntary and the concept of reinforcement is emphasized.
From the 1930s to the late 1960s, behavior theorists developed global theories of learning; that is, theories that attempted to explain all aspects of the learning process. While this approach provided much information about learning, more contemporary theorists investigate and describe specific aspects of the learning process (Klein & Mowrer, 1989). John B. Watson was an American psychologist who was responsible for codifying and publicizing behaviorism. From Watson’s perspective, behaviorism was an approach to psychology that was restricted to the objective, experimental study of the relations between environmental events and human behavior.
According to Klein and Mowrer (1989), behaviorism is a school of psychology that emphasizes the role of experience in governing human behavior. According to behaviorists, organisms possess instinctual motives; however, the important determinants of behavior are learned. “Acquired drives typically motivate us; our actions in response to these motives are also learned through our interaction with the environment” (Klein & Mowrer, 1989, p. 4). For instance, a behaviorist would assume that an individual’s motivation to attend school is learned, and that one’s behaviors while attending school are also learned. One main goal of the behaviorists, then, was the delineation of the laws governing learning, an issue that has dominated academic psychology for generations. For much of the 20th century, behaviorists proposed a number of global explanations of learning; that is, a single view that could explain the learning of all behaviors (Klein & Mowrer, 1989).
Tolman and Purposive Behaviorism. Purposive behaviorism, an approach first developed by Edward Chace Tolman, involved a synthesis of Gestalt psychology and behaviorism that focused on an entire, goal-directed action, including both muscular responses and the cognitive processes that tend to control and direct them. Tolman was the first to selectively breed rats for high and low maze-solving abilities (Tolman, 2002). According to Gantt (2004), learning is a concept rather than a thing, and the activity called learning is inferred only through behavioral symptoms. Gantt points out that this implicit distinction between behavior and inferred process was one of Tolman’s major contributions and served to reconcile influential views that might otherwise seem completely at odds.
Intervening Variables. Tolman referred to himself as a behaviorist and was ostensibly constrained by Watson’s insistence on objectivity; however, Tolman was also interested in thinking, expectancy, and consciousness, and maintained that learning is produced by such independent variables as the organism’s previous training and physiological condition and by the response the environment requires (Gantt, 2004). From Tolman’s perspective, the development of learning is revealed through the changing probability that a given behavior (in this case, the dependent variable) will occur. Tolman maintained that learning itself is not directly observable; rather, it is an intervening variable, one that is inferred as a connecting process between antecedent (independent) variables and consequent (dependent) behavior (Gantt, 2004).
Research to date suggests that intervening variables may have discoverable physiological bases. Meehl and MacCorquodale proposed a distinction between the abstractions advocated by some and the physiological mechanisms sought by others, recommending the term intervening variable for the abstraction and hypothetical construct for the physiological foundation; for instance, Hull treated habit strength as an intervening variable, defining it as an abstract mathematical function of the number of times a given response is rewarded (Gantt, 2004). By contrast, Edward L. Thorndike viewed learning as a hypothetical construct, and suggested a physiological mechanism involving the improved conduction of nerve impulses. Intervening variables and hypothetical constructs, however, need not be incompatible; Thorndike’s hypothetical neural process could empirically be found to be the mechanism through which Hull’s abstraction operates. Finally, with the growing recognition of the complexity of learning, the grand theories of Guthrie, Hull, and Tolman have largely been abandoned except as historical landmarks (Gantt, 2004).
Schedules of Reinforcement. Shaver and Tarpy (1993) note that Pavlov’s conditioning experiments established the relationship between behavior and reward. Tolman was among a number of researchers who attempted to determine whether an organism could learn an instrumental response absent a reward: “In a classic study intended to demonstrate latent learning, rats were allowed to run through a maze with fourteen choice points (Tolman & Honzik, 1930).
Animals in one group always received a reward when they arrived at the goal box. Animals in a second group never received a reward; they were taken out of the maze after a fixed period of time and later fed in their home cage” (Shaver & Tarpy, 1993, p. 313). However, the behavior of the third group of rats confused the researchers. Shaver and Tarpy write that “These subjects did not receive a reward on the first 10 trials, but starting on the eleventh trial they were given a reward when they reached the goal box. The change resulted in a dramatic and sudden decline in the mean number of errors. The suddenness of the decline in errors is important, because it suggests that reinforcement was not necessary for the learning to take place” (Shaver & Tarpy, 1993, p. 313).
This means that if the animals had not learned anything during the first 10 trials, they should have shown a gradual reduction in mean number of errors beginning with trial 11; therefore, the only way to account for their dramatic improvement is to suggest that the rats had in fact learned how to find the goal box during the first 10 trials but simply had not performed in a manner that indicated they had learned anything at all (Shaver & Tarpy, 1993).
Perhaps this is the most intriguing and problematic aspect of research with any type of animal. According to Klein and Mowrer (1989), “It should be noted that a contingency is not necessarily a perfect relationship between behavior and reinforcement. An individual may have to exhibit several or many responses in order to gain reinforcement” (p. 196). In order to respond effectively, for example, the animal or person must be sensitive to the degree of relationship between behavior and reinforcement. Skinner’s (1938) concept of schedules of reinforcement was based on the observation that contingencies differ and that animals sensitive to the exact contingency behave differently when confronted with different contingencies. “Animals and people can also recognize that there is no relationship between behavior and reinforcement. This situation can lead to feelings of helplessness and depression” (Klein & Mowrer, 1989, p. 11).
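Skinner's observation that animals track the exact contingency can be sketched with a minimal schedule simulator (an illustrative Python sketch; the fixed-ratio schedule is a standard textbook example, but the function names here are my own assumptions):

```python
def fixed_ratio(n):
    """Return an FR-n schedule: deliver a reinforcer after every n-th response."""
    count = 0

    def schedule(response_made):
        nonlocal count
        if response_made:
            count += 1
            if count == n:
                count = 0
                return True   # reinforcer delivered
        return False          # no reinforcer this time
    return schedule

fr3 = fixed_ratio(3)
outcomes = [fr3(True) for _ in range(6)]
print(outcomes)  # [False, False, True, False, False, True]
```

Under an FR-3 contingency only every third response pays off; an animal sensitive to this exact rule will respond differently than it would under, say, continuous reinforcement.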
Avoidance Training for Dogs. Instructional strategies for teaching various learning outcomes with dogs include shaping, fading, chaining (Driscoll, 1994) and punishment (Hayes & Rehfeldt, 1998). Shaping is used in some applications to teach relatively simple tasks by breaking the task down into small components (Driscoll, 1994). The shaping approach delivers reinforcement throughout the steps; by contrast, chaining delays reinforcement until the end, when the learner can demonstrate the task in its entirety (Driscoll, 1994). Driscoll points out that the use of practice and shaping in instruction has its roots in behaviorism. The sequencing of practice from easy to complex, together with the use of prompts, are both strategies Skinner used in his research on operant conditioning. These techniques employ successive approximations that are reinforced until the goal has been reached (Driscoll, 1994).
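The shaping procedure Driscoll describes, reinforcing successive approximations until the goal behavior appears, can be sketched as follows (a hypothetical Python illustration; scoring each attempt as a number approaching a target value is my simplification, not a method from the cited sources):

```python
def shape(target, attempts, tolerance=0.1):
    """Reinforce successive approximations: reward any attempt that comes
    closer to the target than any previous one, stopping once an attempt
    falls within tolerance of the goal behavior."""
    best_error = float("inf")
    rewarded = []
    for attempt in attempts:
        error = abs(target - attempt)
        if error < best_error:      # closer than ever before -> reinforce
            best_error = error
            rewarded.append(attempt)
        if error <= tolerance:      # goal behavior reached
            break
    return rewarded

# e.g., a "sit" posture scored as a value approaching 1.0
print(shape(1.0, [0.2, 0.1, 0.5, 0.4, 0.8, 0.95]))  # [0.2, 0.5, 0.8, 0.95]
```

Only improving attempts are reinforced, so the criterion for reward tightens as training proceeds, which is the essence of successive approximation.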
Attempts have been made to inhibit the elicitation of conditioned emotional responses in conditioned suppression and avoidance preparations, on the assumption that such responses occur in the autonomic nervous system, using autonomic blocking agents, CNS tissue ablation, and drugs. These approaches have produced a wide range of deviations from stereotypical operant response patterns; however, operant responses have not been observed to cease altogether in such preparations, as would be expected if respondently conditioned emotional responses did, in fact, mediate operant responding (Hayes & Rehfeldt, 1998). Further, although Shapiro (1961) determined that elicited salivation in dogs consistently preceded discriminated lever pressing on differential reinforcement of low-rate (DRL) schedules, hence supporting the mediational role of the respondent, Williams (1965) found lever pressing prior to salivation on fixed ratio (FR) schedules, thereby controverting such a mediational role. Furthermore, the finding that avoidance responding can be reliably maintained in the absence of a prior stimulus further challenged mediational accounts of avoidance and other operant behavior (Hayes & Rehfeldt, 1998).
According to Brush, Overmier, and Solomon (1985), the link between avoidance training and punishment has appeared to be a natural one; however, operant conditioning theorists have converged on the definition of punishment as passive avoidance training. “By definition, an avoidance training procedure is one in which an aversive stimulus follows all responses except some particular well-defined response; a punishment training procedure is one in which an aversive stimulus follows only some particular well-defined response” (p. 10). In these cases, the task in an avoidance training procedure is to learn what to do; the task in a punishment training procedure is to learn what not to do. Based on such a close relationship between avoidance training and punishment, it seemed reasonable to believe that a common theory might account for both.
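The definitional contrast quoted above can be restated as two contingency rules (a minimal Python sketch; the response labels are hypothetical examples of "well-defined responses"):

```python
def avoidance_contingency(safe_response):
    """Avoidance training: the aversive stimulus follows every response
    EXCEPT the one designated response (learn what TO do)."""
    return lambda response: response != safe_response   # True = aversive stimulus

def punishment_contingency(punished_response):
    """Punishment training: the aversive stimulus follows ONLY the one
    designated response (learn what NOT to do)."""
    return lambda response: response == punished_response

avoid = avoidance_contingency("jump")
punish = punishment_contingency("eat_meat")
print(avoid("sit"), avoid("jump"))        # True False
print(punish("eat_meat"), punish("sit"))  # True False
```

Written this way, the two procedures are mirror images of each other, which is why a common theory accounting for both seemed plausible.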
Solomon’s early research at Harvard University on traumatic-avoidance learning processes formed the basis for his research on punishment. He determined that traumatic-avoidance learning was highly resistant to extinction. Under ordinary extinction procedures, in which no further shock would occur regardless of the dog’s behavior, many of the animals in Solomon’s experiments continued to respond indefinitely. A variety of special techniques were tested that might serve to speed up the extinction process; one of them was the punishment-extinction procedure in which shock occurred only if the animal jumped across the barrier in the presence of the conditioned stimulus (CS) (Brush, Overmier & Solomon, 1985).
This procedure resulted in some confusing findings. Many of the dogs continued to respond when each response was punished, and they usually increased their speed of response (Solomon, Kamin, & Wynne, 1953). It became clear to Solomon that the failure to stop responding was not attributable to a lack of knowledge of the relevant contingencies; before executing the jump, for instance, the dog frequently made vocalizations indicating that it “knew” the punishment was about to occur, yet it did not inhibit the response. This was one piece of “evidence” that Solomon used to conclude that it is impossible to accurately predict the behavior of an animal on the basis of what the animal “knows.” In order to understand animal behavior, therefore, it is essential to have a theory of learning and motivation as well as a theory of cognition.
Another punishment experiment involving dogs that was underway in the mid-1950s was known in the laboratory as the “superego” experiment (Black, Solomon, & Whiting, 1954). The major qualitative observations were as follows. Young dogs lacking extensive experience with people were allowed to eat a small amount of ordinary chow or a larger amount of preferred meat. “When it was clear that the dogs would eat the preferred food first, punishment training began. The experimenter would rap the dogs on the nose with a rolled up newspaper either just before or just after they started to eat the meat” (Brush et al., 1985, p. 10). In either event, following a very few such punishment episodes, the dogs ate the chow but not the meat, because the meat was then regarded as “taboo.” Thereafter, the situation continued as before, except that no experimenter was present. The dogs would eat the chow and leave the meat, day after day, even at a level of food intake insufficient to sustain them; eventually, though, the dogs broke the taboo and ate the meat.
According to Brush et al. (1985), the major observation of this study was that the post-ingestional behavior of the dogs was a function of the timing of the punishment. “Only those previously punished just after starting to eat the meat displayed a guilt reaction when they broke the taboo – they tucked their tails and hid in the corner!” (Brush et al., 1985, p. 11). This was an important observation; however, it was never published and probably cannot yet be properly appreciated, because researchers do not have a context in which to understand it (Brush et al., 1985).
Solomon, Turner, and Lessac (1968) conducted a study of delay of punishment and resistance to temptation in dogs, presenting what they call “a complicated theory of conscience” (p. 234). These authors assumed that the dogs, even with long delays of punishment, would quickly “know” which types of food would result in punishment administered by the experimenter, a sort of cognitive learning that can span long temporal delays. According to Solomon et al. (1968), “The dogs know what they are not supposed to eat! However, when the experimenter is missing, and the dogs are faced with an uncertainty or a change in the controlling stimulus situation, the authors’ argument is that cognition is not enough” (p. 234). In this case, the hungry dogs can no longer be certain that eating the forbidden horsemeat will result in punishment, because the experimenter is gone; it is also under these conditions of changed social stimulation that the authors believe the conditioned emotional reactions of the dogs “take over” (p. 237). Clearly, whenever dogs or any animals, including humans, are subjected to these types of arbitrary reinforcement schedules, there will be mixed results from study to study, but the adverse impact on the test subjects remains relatively unchanged.
The research showed that the bond between dogs and mankind extends back into the mists of time, but that this relationship has not always been mutually beneficial. Dogs have been, and continue to be, favorite laboratory test animals for many scientists, provoking considerable backlash from animal rights activists. The value of the findings from research conducted using dogs has also been questioned, although the results of such studies show that dogs are much like people when it comes to the relationship between learning and reinforcement. Both dogs and humans are able to recognize that in some cases there is simply no relationship between behavior and reinforcement, a situation that has been shown to lead to feelings of helplessness and depression. Despite these constraints, the research also showed that operant conditioning has been used with great success in some settings, leading researchers to believe that there is more nurture than nature in this debate.
References
Ader, R., Baum, A., & Weiner, H. (1988). Experimental foundations of behavioral medicine: Conditioning approaches. Hillsdale, NJ: Lawrence Erlbaum Associates.
Black, A.H., Solomon, R.L., & Whiting, J.W.M. (1954, April). Resistance to temptation as a function of antecedent dependency relationships in puppies. Paper presented at the Eastern Psychological Association meeting, New York. In American Psychologist, 9, 579.
Brush, F.R., Overmier, J.B., & Solomon, R.L. (1985). Affect, conditioning, and cognition: Essays on the determinants of behavior. Hillsdale, NJ: Lawrence Erlbaum Associates.
Dogs and People: The History and Psychology of a Relationship. (1996). Journal of Business Administration and Policy Analysis, 24-26, 54.
Driscoll, M.P. (1994). Psychology of learning for instruction. Boston: Allyn and Bacon.
Ellis, A. (1999). Best of the Century. Psychology Today, 32(6), 34.
Gantt, H.G. (2004). Behaviorism and John B. Watson. In Encyclopedia Britannica.com [premium service].
Gormezano, I., Prokasy, W.F. & Thompson, R.F. (Eds). (1987). Classical conditioning. Hillsdale, NJ: Lawrence Erlbaum Associates.
Hayes, L.J. & Rehfeldt, R.A. (1998). The operant-respondent distinction revisited: Toward an understanding of stimulus equivalence. The Psychological Record, 48(2), 187.
Klein, S.B. & Mowrer, R.R. (1989). Contemporary learning theories. Hillsdale, NJ: Lawrence Erlbaum Associates.
Podberscek, A.L., Paul, E.S., & Serpell, J.A. (Eds.). (2000). Companion animals and us: Exploring the relationships between people and pets. Cambridge: Cambridge University Press.
Schwartz, M. (1997). A history of dogs in the early Americas. New Haven, CT: Yale University Press.
Shaver, K.G. & Tarpy, R.M. (1993). Psychology. New York: Macmillan Publishing Co.
Solomon, R.L., Kamin, L.J., & Wynne, L.C. (1953). Traumatic avoidance learning: The outcomes of several extinction procedures with dogs. Journal of Abnormal and Social Psychology, 48, 291-302.
Solomon, R.L., Turner, L.H., & Lessac, M.S. (1968). Some effects of delay of punishment on resistance to temptation in dogs. Journal of Personality and Social Psychology, 8, 233- 238.
Thurston, M.E. (1996). The lost history of the canine race: Our 15,000-year love affair with dogs. Kansas City, MO: Andrews and McMeel.
Tolman, Edward C. (2002). In The Columbia Encyclopedia (6th ed.). [Online]. Available: http://www.bartleby.com/65/to/Tolman-E.html.
Wendt, L.M. (1996). Dogs: A historical journey: The human/dog connection through the centuries. New York: Howell Book House.
Windholz, G. (1997). Ivan P. Pavlov: An overview of his life and psychological work. American Psychologist, 52(9), 942.