Concepts of Psychology
Theories of Learning
There are different theories of how people learn and interpret the world around them. According to Gestalt psychologists, “people see patterns in stimuli before them and conjoin isolated events into meaningful structures” (Gilbert, 1998, p. 67). In order to learn, we use proximity and equality: proximity involves using “location information to infer relation,” while under equality, “elements are perceived as belonging together if they resemble each other in form” (Gilbert, 1998, p. 67). Gestalt psychologists also developed the Prägnanz principle, which suggests we organize our perceptual fields into wholes, helping to bring about “meaningful structure, balance, and completeness” (Gilbert, 1998, p. 67).
As for behaviorists, their theory of learning holds that we are born as a blank slate that is molded by experience. Learning is a process of forming connections between stimuli and responses. This theory rests on the idea that our “motivation to learn was assumed to be driven primarily by drives, such as hunger, and the availability of external forces, such as rewards and punishments” (Bransford, 2000, p. 6). Rewards increase the strength of connections between stimuli and responses, as suggested by behavioral psychologist Thorndike. Other notable behaviorists were Skinner, who developed the idea of operant conditioning, and Watson, the father of behaviorism.
Lastly, in Social Learning Theory, people learn by observing one another. Albert Bandura developed Social Learning Theory, which suggests people use modeling and imitation to learn within a social context (Ormond, 1999). This theory began with roots in behaviorism and the role of stimulus and response, but grew to include the role of cognitive functions in learning. It includes “people’s interpretations of what they see, their expectations regarding future events, and their beliefs about their ability to successfully accomplish challenging tasks” (Weiner, 2010, p. 1632). In this theory, the person controls their own learning, not the environment they are in.
To find out what the most popular flavor of ice cream is, a research study must be designed that defines a particular target population. In this example, the study will target undergraduates at a given university in the United States. Since it is unreasonable to assume data can be collected from all undergraduate students, a sample from this population will be chosen to take the survey. Each individual in a database of all undergraduate students attending the university will be assigned a number, and a random number generator will be used to compile a representative sample. Where necessary, weighting by factors such as race or gender must take place to bring the representative sample closer to the actual proportions in the given population (“How do you spend your time?”).
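The numbering-and-random-draw step described above can be sketched in a few lines of code. This is only an illustrative sketch: the roster of 20,000 numbered students, the sample size of 500, and the names `roster` and `draw_sample` are all invented for the example.

```python
import random

def draw_sample(roster, sample_size, seed=None):
    """Draw a simple random sample of student numbers from the roster.

    Every student has an equal chance of selection, which is what
    makes the resulting sample representative of the population.
    """
    rng = random.Random(seed)  # seeded so the draw is reproducible
    return rng.sample(roster, sample_size)

# Hypothetical roster: every undergraduate assigned a number 1..20000
roster = list(range(1, 20001))
sample = draw_sample(roster, 500, seed=42)
```

Because `random.sample` selects without replacement, no student can appear in the sample twice.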
After acquiring a random sample of participants from the undergraduate population at the university, a survey questionnaire would be distributed to them via e-mail. The survey would be anonymous and would ask participants to rank their favorite flavors of ice cream from a list of common flavors, with a write-in option for those who do not see their flavor of choice on the questionnaire. Participants would return the questionnaire, the data from the sample would be collected and recorded, and the tallied results would identify the favored flavors.
Questionnaires and surveys work best as the method of observation in this study because they can produce a great deal of information from a large sample. If a given university has 20,000 undergraduates, a sample of around 5,000 participants could yield flavor preferences that generalize accurately to the entire undergraduate population. To better understand the sample size needed for this survey, a confidence interval should be calculated. The confidence interval indicates how good an estimate the sample provides of the true preferences of the population; a 90% or 95% confidence interval is most often used (“How do you spend your time?”). The sample findings can then be generalized to the population.
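The confidence-interval calculation mentioned above can be approximated with the standard margin-of-error formula for a sample proportion, z·√(p(1−p)/n). A minimal sketch, assuming a hypothetical flavor preferred by 30% of respondents (the figure and the name `margin_of_error` are made up for the example):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Margin of error for a sample proportion.

    z = 1.96 corresponds to a 95% confidence interval;
    use z = 1.645 for a 90% interval instead.
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical result: 30% of a 5,000-person sample prefers one flavor
moe = margin_of_error(0.30, 5000)
low, high = 0.30 - moe, 0.30 + moe  # the 95% confidence interval
```

With n = 5,000 the margin of error works out to roughly ±1.3 percentage points, which is why such a large sample gives a very precise estimate; even a few hundred respondents would keep it under ±5 points.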
Neurons, or nerve cells, together with glial cells, make up the nervous system. Each neuron is an individual cell comprised of the soma, or cell body, the dendrites, and the axon (Wilmore, 2008, p. 80). Dendrites extend off of the surface of the cell body, while the axon “conducts nerve impulses to other neurons or to muscle cells” (Brodal, 2010, p. 5).
The neuron has many dendrites, and it is here that impulses, or action potentials, enter the neuron “from sensory stimuli or from adjacent neurons” and are carried to the soma (Wilmore, 2008, p. 80). The axon, on the other hand, carries impulses away from the cell body. The axon is surrounded by a myelin sheath, which increases the speed of impulse transmission (Brodal, 2010, p. 5).
The axon branches out at its end, and the tips of these branches are referred to as axon terminals. The tips are filled with chemicals known as neurotransmitters (Wilmore, 2008, p. 80). A gap called the synaptic cleft exists between the axon terminal and the dendrite of another neuron. Neurotransmitters cross the synapse to the next cell, continuing the relay of the message. The end goal of this process is the transmission of the nerve impulse, an electrical charge, to an “end organ, such as a group of muscle fibers, or back to the CNS” (Wilmore, 2008, p. 80).
In order for visual images to be transmitted to the brain, light must first pass through the cornea, which covers the surface of the eye. The opening that allows light to pass into the eye is called the pupil and is surrounded by a muscle referred to as the iris. The pupil is adjustable, and its size changes depending on the amount of light coming in. Upon entering through the pupil, the light hits the lens. The lens is capable of adjusting its thickness and, in doing so, projects a clear image onto the sensory receptors at the back of the eye (Rathus, 2010, p. 131). The process of the lens changing its shape to focus light is called accommodation and is determined by the distance between the lens and the object in view (Nairne, 2009, p. 138). The focused light then reaches the retina, a layer of tissue covering the back of the eye. Here, “the electromagnetic energy gets translated into the inner language of the brain” (Nairne, 2009, p. 139).
Within the retina are light-sensitive receptors that transform light into electrochemical impulses. These are called rods and cones, and there are about 126 million of them within the eye. Cones are less sensitive to light and are responsible for the color in our vision. Rods are sensitive to the intensity of light and are responsible for peripheral vision. When light reacts with photopigment within the receptors, a chemical reaction occurs that leads to the creation of a neural impulse sent on to the brain (Nairne, 2009, p. 139).
We have two binocular cues that help us determine whether an object is nearer or farther than another, giving us a sense of depth. These two cues are retinal disparity and convergence. Because our eyes are a few inches apart, each eye receives a slightly different image when viewing the same scene. The difference between these two images is interpreted by the brain and provides cues as to the relative distance between the objects we see. This is retinal disparity (Nevid, 2009, p. 120).
Convergence, in turn, allows us to see a single image even though each eye brings in a different view. In convergence, the eyes turn inward to create one image. Convergence depends upon “the muscular tension produced by turning both eyes inward to form a single image” (Nevid, 2009, p. 120). The closer the object being viewed, the more tension is created, and this tension is the brain’s cue for depth.
Continuous and partial reinforcement are both techniques used to elicit a certain response. With continuous reinforcement, every occurrence of a specified response is reinforced. When a particular response is reinforced only some of the time, it is referred to as partial reinforcement. Partial reinforcement is more resistant to extinction, as exhibited in Skinner’s experiment with pigeons pecking for food. Skinner conditioned pigeons to peck at a disk, which triggered the dispensing of food. A pigeon trained through continuous reinforcement “would continue pecking at the disk 100 times or so before the behavior decreased significantly, indicating extinction” (Hockenbury, 2003, p. 218). Those trained with partial reinforcement, however, would peck at the disk thousands of times, as if reinforcement might still occur despite the delay. This is an example of the partial reinforcement effect.
Skinner also developed the idea of schedules of reinforcement, in which a reinforcer is delivered according to a preset pattern based on either the number of responses or a time interval between responses (Hockenbury, 2003, p. 218). There are fixed-ratio schedules, variable-ratio schedules, fixed-interval schedules, and variable-interval schedules. In a fixed-ratio schedule, a reinforcer is delivered after a fixed number of responses. In this type of schedule, shaping is key to getting the participant to respond: it may be necessary to start with continuous reinforcement and gradually increase the fixed number of responses required for a reinforcer to be delivered. This schedule “produces a high rate of responding, with a pause after the reinforcer is delivered” (Hockenbury, 2003, p. 219), followed by another burst of responses.
With a variable-ratio schedule, responses follow a steady pattern, with few pauses after the reinforcer is delivered. Here, reinforcement follows an average number of responses that varies between trials (Hockenbury, 2003, p. 219). A participant may need to respond 25 times in one trial to receive reinforcement, whereas a second trial may require only 20 responses. While each individual trial is unpredictable, over many trials the ratio of responses to reinforcement approaches a predetermined average (Hockenbury, 2003, p. 219).
Interval schedules use time to determine the delivery of the reinforcer. With a fixed-interval schedule, reinforcement comes after a fixed time interval elapses after the first response. Using this schedule produces a pattern of increased responses as time for the next reinforcer gets closer. Once the reinforcer is delivered, responses tend to drop off until the end of the fixed interval again draws near (Hockenbury, 2003, p. 219).
Lastly, on a variable-interval schedule, a reinforcer is delivered a certain amount of time after the first response, but this time is an average that varies from trial to trial. Any single trial is unpredictable, yet over many trials the delays approach an average time interval. The pattern this schedule produces is a moderate, steady rate of responding: because the varied time intervals make reinforcement unpredictable, the expectation that a reinforcer might arrive at any moment sustains a steady rate of responses (Hockenbury, 2003, p. 219).
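The ratio schedules described above amount to simple decision rules about when a response earns a reinforcer, and they can be sketched in code. This is an illustrative sketch only: the function names are invented, and the variable-ratio rule is modeled here as a 1-in-`mean_ratio` chance per response, one common way to simulate its unpredictability.

```python
import random

def fixed_ratio(n_responses, ratio):
    """Fixed-ratio schedule: every `ratio`-th response is reinforced."""
    return [(i + 1) % ratio == 0 for i in range(n_responses)]

def variable_ratio(n_responses, mean_ratio, seed=None):
    """Variable-ratio schedule: each response is reinforced with
    probability 1/mean_ratio, so any single trial is unpredictable
    but reinforcement averages one per `mean_ratio` responses."""
    rng = random.Random(seed)  # seeded so the simulation is repeatable
    return [rng.random() < 1 / mean_ratio for _ in range(n_responses)]

# On a fixed-ratio 5 schedule, reinforcement arrives like clockwork:
fr = fixed_ratio(10, 5)             # reinforcers after responses 5 and 10
vr = variable_ratio(10, 5, seed=1)  # reinforcers at unpredictable points
```

The interval schedules follow the same pattern, except the rule compares elapsed time since the last reinforcer rather than a count of responses.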
Bransford, J. (2000). How people learn: Brain, mind, experience, and school. Washington, D.C.:
National Academy Press.
Brodal, P. (2010). The central nervous system: Structure and function (4th ed.). New York, NY:
Oxford University Press.
Gilbert, D.T., Fiske, S.T., & Lindzey, G. (1998). Handbook of social psychology: Volume 1 (4th
ed.). New York, NY: Oxford University Press.
Hockenbury, D., & Hockenbury, S. (2003). Psychology. New York, NY: Worth.
“How do you spend your time?” EHP Science Education Program. Retrieved from ehponline.org/education.
Nairne, J.S. (2009). Psychology (5th ed.). Belmont, CA: Thomson Wadsworth.
Nevid, J.S. (2009). Psychology: Concepts and applications. Boston, MA: Houghton Mifflin.
Ormond, J.E. (1999). Human learning (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Rathus, S.A. (2010). Psychology: Concepts and connections, brief version. Belmont, CA:
Wadsworth.
Weiner, W.E.C. (2010). The Corsini encyclopedia of psychology: Volume 2. Hoboken, NJ: John
Wiley & Sons.
Wilmore, J.H., Costill, D.L., & Kenney, W.L. (2008). Physiology of sport and exercise (4th
ed.). Champaign, IL: Human Kinetics.