Origins and Import of Reinforcing Self-Stimulation of the Brain


The phenomenon of self-stimulation of the brain was discovered in 1954 and was one of the more important discoveries in behavioral neuroscience and psychology. The present article examines the origins of the phenomenon. It describes some of the distal origins briefly, since each has a long history of its own, and concentrates on the more proximal origins, including experiments preceding and closely related to the phenomenon, together with an account of the relevant knowledge of the time. A brief account of the events leading to Olds’ crucial, if serendipitous, experiment is retold, followed by descriptions of some early experiments that preceded the later voluminous body of work based on it. Finally, some views are expressed about the import of the finding.

Keywords: brain, self-stimulation, pleasure center, reinforcement, Olds, Milner, history



In 1954, James Olds and Peter Milner made the startling discovery that a rat would press a lever to deliver electrical stimulation to its own brain, indicating that electrical stimulation of the brain may act as a reward or reinforcement. Although this finding may have been serendipitous, some investigators consider this to be one of the most important discoveries made in the field of neuroscience (Thompson and Robinson, 1979).

This view may be justified by the enormous amount of research the finding has generated on the nervous system and behavior. This work is summarized in a number of reviews and collections of articles on self-stimulation (Hoebel, 1988; Hoebel and Novin, 1982; Mogenson, 1977; J. Olds, 1977; Rolls, 1975; Stellar and Stellar, 1985; Valenstein, 1973; Wauquier and Rolls, 1976; White and Franklin, 1989; and Wise, 1998).

No attempt will be made here to review the large body of work on self-stimulation. The present work is a more limited attempt to look back at the beginnings of self-stimulation with some historical perspective. It is concerned with the background against which self-stimulation appeared, with some of the events leading to its discovery, and with some of the early experiments that had relevance for future research trends. It is highly selective and ends by expressing some views about the import of the self-stimulation phenomenon.

The author presented an abstract on this subject at the History of Neuroscience poster sessions at the 1993 Society for Neuroscience meeting. I thank Stanley Finger for his recommendations and advice. A special debt of gratitude is due to Duane Haines for his helpful suggestions and for technical assistance.

Address correspondence to Henry J. de Haan, 5403 Yorkshire Street, Springfield, VA 22151-1203. Tel.: 703-978-9065. E-mail:


Prior Research and Theory

Much of the work to be described involves placing an electrode in the brain by means of a stereotaxic instrument, using the coordinates in a brain atlas. Each of these techniques has a long history of its own and consequently will not be covered here. For a more extensive treatment of brain stimulation, see Valenstein (1973). During the 1930s, considerable use was made of the stereotaxic instrument by Stephen Ranson and his colleagues at Northwestern University (e.g., Brobeck, Hetherington, and Magoun) to explore the brainstem and hypothalamus. Areas of the hypothalamus were identified for the regulation of food and water intake, of temperature, and of sleep and wakefulness.

Hetherington and Ranson (1942) found that bilateral lesions of the ventromedial hypothalamus of the albino rat resulted in obesity. Brobeck, Tepperman, and Long (1943) showed that this obesity was accounted for by excess ingestion of food. They named this phenomenon “hypothalamic hyperphagia.” Anand and Brobeck (1951) found that lesions in the lateral hypothalamus would produce aphagia and often death by starvation. They suggested that the lateral hypothalamus be designated the “feeding center” and the ventromedial hypothalamus the “satiety center.”

Magoun was another investigator associated with Ranson who later made fundamental discoveries concerning sleep and wakefulness in mammals. Moruzzi and Magoun (1949) found that electrical stimulation of the brainstem reticular formation and the basal diencephalon would arouse a sleeping cat. At the same time, electroencephalographic recordings from the cat showed a change from the slow waves characteristic of sleep to the fast-wave activity characteristic of an awake, alert animal. They called this area of the brainstem the “reticular activating system.” In the next few years, additional facts about this system were established. Many of the preceding investigations were based on the use of electrodes for producing lesions or for stimulating the brains of lightly anesthetized animals.

Meanwhile, Hess in Switzerland had been using chronic depth electrodes to investigate the diencephalon of the cat since 1928. With these electrodes, experiments could be done with relatively unrestrained animals that were fully awake (Hess, 1954). Many of Hess’ experiments evoked autonomic responses; however, some of his experiments evoked behavioral responses such as eating or rage reactions. He pointed out that rage reactions in his cats were not “sham rage” but were fully developed rage reactions. Hess visited Harvard University in 1952 and introduced his techniques to American investigators.

Some of the origins of self-stimulation lie in the physiology of motivation and in contributions from the experimental psychology of learning, particularly the theories of Clark L. Hull (1943) and Burrhus F. Skinner (1938). Historically, one of the fundamental concepts in the physiology of motivation, and in many other psychological categories as well, is homeostasis, the dynamic equilibrium maintaining the constancy of the internal environment (Cannon, 1932). Nevertheless, the local-factors concept of hunger and thirst, attributing them to stomach contractions and dryness of the throat, was criticized by many as being too simplistic. The concept of the “central motive state” was an attempt by Clifford T. Morgan (1943) to depict the nature of motivation more adequately.

The classic paper by Eliot Stellar (1954) on the physiology of motivation presented a more complete physiological theory of motivation, developed by amplifying and expanding the earlier views of Morgan. This theory held that the hypothalamus was the main nexus within the limbic system for motivated behavior and that there were both excitatory and inhibitory mechanisms within the hypothalamus for all motivated behaviors. It viewed the hypothalamus as the main integrating mechanism for motivation and as part of a brain circuit by means of which a number of factors could exert an influence. Among these factors were internal chemical and physical factors, cortical and thalamic factors having both excitatory and inhibitory influences, and sensory stimuli, both learned and unlearned, including feedback from consummatory behavior.

The concept of motivation was thus one of multiple hypothalamic drive systems with some localization of function within the hypothalamus. It was held that there were sites within the hypothalamus for many drives, including hunger, thirst, sex, sleep, activity, temperature, etc. As part of the evidence for his theory, Stellar (1954) described the physiological basis for each of these drives. More information was probably available on the physiological basis of hunger at that time than for any other drive. Because of the important role of hunger in the self-stimulation literature, the experimental findings are summarized here. The lateral hypothalamus was regarded as the “hunger” or “appetite” center, a name given to it by Anand and Brobeck (1951), who found that lesions in the lateral hypothalamus produced aphagia. This was confirmed by electrical-stimulation studies that produced eating behavior (Delgado and Anand, 1953). Lesions of the ventromedial hypothalamus had earlier been found to result in obesity (Hetherington and Ranson, 1942). The cause of the obesity was found to be overeating, which was referred to as “hypothalamic hyperphagia” (Brobeck, Tepperman, and Long, 1943). The ventromedial nucleus of the hypothalamus was later called the “satiety center,” and it supposedly had an inhibitory influence on the (excitatory) “hunger center” (Anand and Brobeck, 1951). This exemplifies the 1950s perspective on the physiological basis for one drive. It included the concept of both excitatory and inhibitory centers within the hypothalamus, subject to the influence of internal factors (such as nutrients in the circulatory system), external factors (feedback from regulatory behavior), and influences from higher brain centers.

Contributions to the origin of self-stimulation also came from behavioral psychology. They included the concept of operant conditioning and the methodology of its measurement (Skinner, 1938). The techniques included rate-of-response measures, response-recording technology, and the Skinner box. Olds was familiar with all of this from his graduate work at Harvard. To many behavioral psychologists of the 1940s, drive reduction was an important concept. Hull (1943) held that drive reduction was essential for learning. Food is frequently used as a reward or reinforcement for learning in animals; in other learning situations involving electric shock, escape from pain or fear is said to be reinforcing. Neal Miller was interested in determining whether electrical brain stimulation could function in the same way as peripheral stimulation. He and his colleagues (Delgado, Roberts, and Miller, 1954) established that brain stimulation can be an aversive stimulus and function as punishment to reinforce learning (i.e., animals will learn to escape from brain stimulation).

The seeds of self-stimulation can be seen in all of the above-mentioned facts, theories, and events. These included knowledge of electrical stimulation of the brain and the physiological facts and theories of motivation and drive derived from lesions and stimulation of brain structures. In addition, the stage was set by Hess’ work on the behavioral reactions of animals to diencephalic stimulation, by the development of techniques for stimulation through chronic depth electrodes in relatively free-ranging animals, and by concepts and methods from behavioral psychology. Finally, Miller’s experiment showing that brain stimulation can motivate escape learning was a crucial finding that would have influenced Olds at the time of the discovery.


Discovery of Self-Stimulation

The story of the initial experiment that led to the notion of reinforcing self-stimulation is familiar to many neuroscientists and has been told by both J. Olds (1955, 1956a, 1956b, 1977; J. Olds and Milner, 1954) and Milner (1989). The following summary is based on the latter account. After obtaining a Ph.D. in social psychology from Harvard in 1952, Olds, intrigued by Hebb’s book The Organization of Behavior and Neuropsychological Theory (1949), went to Hebb’s department at McGill University on a postdoctoral fellowship. There he met Milner who was conducting research for his Ph.D. dissertation using freely moving rats with chronic electrodes in the brainstem reticular formation.

In Olds’ own words (1956a, pp. 4-5):

“We were not at first concerned to hit very specific points in the brain, and in fact in our early tests the electrodes did not always go to the particular areas in the mid-line systems at which they were aimed. Our lack of aim turned out to be a fortunate happening for us. In one animal the electrode missed its target and landed not in the mid-brain reticular system but in a nerve pathway from the rhinencephalon. This led to an unexpected discovery.

In the test experiment we were using, the animal was placed in a large box with corners labeled A, B, C and D. Whenever the animal went to corner A, its brain was given a mild electric shock by the experimenter. When the test was performed on the animal with the electrode in the rhinencephalic nerve, it kept returning to corner A. After several such returns on the first day, it finally went to a different place and fell asleep. The next day, however, it seemed even more interested in corner A.

At this point we assumed that the stimulus must provoke curiosity; we did not yet think of it as a reward. Further experimentation on the same animal soon indicated, to our surprise, that its response to the stimulus was more than curiosity. On the second day, after the animal had acquired the habit of returning to corner A to be stimulated, we began trying to draw it away to corner B, giving it an electric shock whenever it took a step in that direction. Within a matter of five minutes the animal was in corner B. After this, the animal could be directed to almost any spot in the box at the will of the experimenter. Every step in the right direction was paid with a small shock; on arrival at the appointed place the animal received a longer series of shocks.

Next the animal was put on a T-shaped platform and stimulated if it turned right at the crossing of the T but not if it turned left. It soon learned to turn right every time. At this point we reversed the procedure, and the animal had to turn left in order to get a shock. With some guidance from the experimenter it eventually switched from the right to the left. We followed up with a test of the animal’s response when it was hungry. Food was withheld for 24 hours. Then the animal was placed in a T both arms of which were baited with mash. The animal would receive the electric stimulus at a point halfway down the right arm. It learned to go there, and it always stopped at this point, never going on to the food at all!

After confirming this powerful effect of stimulation of brain areas by experiments with a series of animals, we set out to map the places in the brain where such an effect could be obtained. We wanted to measure the strength of the effect in each place. Here Skinner’s technique provided the means. By putting the animal in the “do-it-yourself” situation (i.e., pressing a lever to stimulate its own brain) we could translate the animal’s strength of “desire” into response frequency, which can be seen and measured.

The first animal in the Skinner box ended all doubts in our minds that electric stimulation applied to some parts of the brain could indeed provide reward for behavior. The test displayed the phenomenon in bold relief where anyone who wanted to look could see it. Left to itself in the apparatus, the animal (after about two to five minutes of learning) stimulated its own brain regularly about once every five seconds, taking a stimulus of a second or so every time. After 30 minutes the experimenter turned off the current, so that the animal’s pressing of the lever no longer stimulated the brain. Under these conditions the animal pressed it about seven times and then went to sleep. We found that the test was repeatable as often as we cared to apply it. When the current was turned on and the animal was given one shock as an hors d’oeuvre, it would begin stimulating its brain again.

When the electricity was turned off, it would try a few times and then go to sleep.”

After the implantation procedure was modified using the method of Hess (1954), more rats were implanted and stimulated (see Figure 1). Olds immediately concluded that the experiment refuted the drive-reduction theory of reinforcement. He also referred to the region of reinforcement as the “pleasure center,” although he was behaviorist enough not to be fooled by this language (Milner, 1989). Evidence that self-stimulation was indeed a motivated behavior was obtained by showing that a rat would run a maze or cross an electrified grid floor in order to receive brain stimulation, indicating that brain stimulation is a reward.

After Olds left McGill, Milner did not continue with self-stimulation research. He left this to Olds, who became immensely productive in the field. Toward the end of Milner’s career, a conference was held in his honor, to which he contributed an article entitled “The Discovery of Self-Stimulation and Other Stories” (Milner, 1989). The article recounts Olds and Milner’s original discovery, after which Milner pursued theoretical work that attempted to find a learning theory congruent with the phenomenon of self-stimulation.


Figure 1. Diagram of apparatus by means of which a rat delivers electric shocks to its own brain. When the rat steps on the pedal, the electric circuit is closed and current is transmitted to its brain by means of implanted electrodes. (From Olds 1958a, used with permission of the AAAS).


Early Research Trends Following the Discovery

Some of the early postdiscovery experiments are next described, in a highly selective review, in order to point to developing research trends. The location of the anatomical sites yielding self-stimulation was one of the first questions asked by J. Olds (1956a, 1956b, 1958a) and by J. Olds, Travis, and Schwing (1960). With his coworkers, Olds mapped locations in the rat brain where effects were either aversive or rewarding. Rewarding sites, locations where self-stimulation occurred, included many areas of the limbic system and diencephalon. A ventral system with a hub near the medial forebrain bundle yielded high rates of stimulation; a dorsal system with its head in the caudate-septal area, winding through the dorsal thalamus and tectum, yielded lower rates.

The relationship between self-stimulation and basic drives, such as feeding, was of interest to investigators soon after the discovery of the phenomenon (Hoebel and Teitelbaum, 1962; Margules and Olds, 1962; J. Olds, 1958a, 1958b). Margules and Olds reported that, with electrodes in the lateral hypothalamic feeding area of the rat, whenever stimulus-bound feeding occurred, self-stimulation could be obtained from the same electrode, and that food deprivation caused an increment in the self-stimulation rate. Hoebel and Teitelbaum placed an electrode-cannula combination in both the lateral and ventromedial hypothalamus of the rat. Self-stimulation of the lateral hypothalamus was inhibited by ventromedial stimulation or by excessive feeding. Both self-stimulation and feeding were increased (disinhibited) by ventromedial anesthetization or ablation. Food was said to act through the ventromedial hypothalamus to inhibit not only feeding but also lateral hypothalamic self-stimulation. It was concluded that the hypothalamic sites that control feeding also control hypothalamic self-stimulation.

The influence of the temperature-regulating system on lateral hypothalamic self-stimulation was investigated by de Haan and Hamilton (1966). Rats with electrodes in the lateral hypothalamic area were exposed to temperatures from 7 to 35 degrees Celsius. Ambient temperature had an effect on the rate of self-stimulation, although it was not as great as the effect on food intake. It was assumed that the effect might be attributed to effortful behavior at higher temperatures. Work by Briese (1965), however, suggested that the reduction in bar-press rate could also have been due to a direct effect of the temperature-regulating system on the feeding system.

Mogenson and Stevenson (1966) investigated the relationship of drinking to self-stimulation. Drinking, as well as eating, can be elicited from lateral hypothalamic stimulation. Rats that exhibited stimulus-bound drinking behavior from lateral hypothalamic sites also exhibited high rates of brain stimulation when electrical train durations were less than half a second. When durations were longer, self-stimulation and drinking occurred concurrently. It was concluded that the results indicated the close relationship between the “reward” system and the neural systems underlying basic drives. Attempts to link self-stimulation with the biological substrates of the various drives continued for some time.

There were many other early experiments on a variety of topics. Some were on the effects of drugs, possibly a forerunner of the later work on the effects of neurotransmitters, such as the catecholamines, on self-stimulation by Olds’ wife, Marianne, and others (M. E. Olds, 1968; Wise, 1998). As noted by Thompson (1999, pp. 11-12), “special note must be made of Jim’s wife Marianne, who collaborated with him on the pharmacological properties of the sites where brain stimulation was rewarding. She had received training in neurophysiology from T. Bullock at UCLA, and had been a postdoc with Edward Domino, a professor of pharmacology at the University of Michigan Medical School working on the function of the acetylcholine transmitter.” Some papers were concerned with the differences between self-stimulation and natural drives, including the length of time an animal will stimulate to the exclusion of all else, including food and water; the fact that some electrode placements exhibit both reward and punishment aspects; and the rapid extinction, the difficulty with some reinforcement schedules, and the near impossibility of using brain stimulation as a secondary reinforcer. Other early papers were concerned with the best locations for obtaining self-stimulation, such as in the vicinity of the medial forebrain bundle.

Much of this research has been covered by the reviews and collections mentioned at the beginning of the paper. The purpose of the present paper was to examine the origins of self-stimulation, both distal and proximal. A few research efforts were described merely to give a glimpse of the beginnings of what has now become an enormous research effort. In the 1970s, the anatomical description of catecholamine pathways transformed the field into a kind of chemical neuroanatomy (Emson, 1983). For research summaries, consult the previously mentioned reviews and collections, especially those by Wise (1998), Hoebel (1988), Mogenson (1977), and Stellar and Stellar (1985).

Import of the Finding

As mentioned previously, this intriguing phenomenon has generated a large research literature. Quite early, certain trends were evident. Mapping of the brain for self-stimulation sites, particularly those near the medial forebrain bundle, continued and led to correlation with neurotransmitter pathways. This was an outgrowth of pharmacological research that started soon after the field began. “This discovery of the brain reward system led to an explosion of research in the field, and for a period of years it was the most widely studied topic in physiological psychology. Other investigators attacked Olds’ basic notion of a reward system on every conceivable ground; a not uncommon phenomenon in science when a major discovery has been made. The best work in the field continued to be done by Olds and associates” (Thompson, 1999, p. 8). Many young investigators and postdoctoral students, such as myself, were excited by the self-stimulation experiment, and it certainly contributed to a resurgence of physiological psychology and to collaboration between biologically oriented psychologists and neurobiologists. This collaboration expanded during the 1960s, and the founding of the interdisciplinary Society for Neuroscience was certainly an outgrowth of this process.

Olds was widely recognized as a result of the discovery of self-stimulation and the work that followed it. This is eloquently summarized by Thompson (1999, p. 12):

Jim received a number of honors and awards in his career, beginning with the Newcomb Cleveland Prize from the American Association for the Advancement of Science in 1956. He was awarded the Hofheimer Award from the American Psychiatric Association in 1958; the Howard Crosby Warren Medal from the Society of Experimental Psychologists in 1962; and the Distinguished Scientist Award from the American Psychological Association in 1967. Jim was elected to the National Academy of Sciences at the young age of forty-five in 1967 and was elected president of Division 6 of the American Psychological Association in 1971. In my opinion Jim’s discoveries are of such fundamental importance that he merited a Nobel Prize.

Figure 2. Olds (left) receiving the Distinguished Scientist Award from the American Psychological Association in 1967. (Permission granted by Jim Olds, son of the recipient.)

Figure 2 shows Olds receiving the Distinguished Scientist Award from the American Psychological Association in 1967.



References

Anand BK, Brobeck JR (1951): Hypothalamic control of food intake. Yale Journal of Biology and Medicine 24: 123-140.

Briese E (1965): Hyperthermia in self-stimulating rats. Acta Physiologico Latino Americana 15: 357-361.

Brobeck JR, Tepperman J, Long CNH (1943): Experimental hypothalamic hyperphagia in the albino rat. Yale Journal of Biology and Medicine 15: 831-853.

Cannon WB (1932): The Wisdom of the Body. New York, Norton.

de Haan HJ, Hamilton C (1966): Self-stimulation: The effect of temperature. American Psychologist 21: 647 (Abstract); Full text in Proceedings of APA 1966: 97-98.

Delgado JMR, Anand BK (1953): Increase in food intake induced by electrical stimulation of the lateral hypothalamus. American Journal of Physiology 172: 162-168.

Delgado JMR, Roberts W, Miller NE (1954): Learning motivated by electrical stimulation of the brain. American Journal of Physiology 179: 587-593.

Emson PC (1983): Chemical Neuroanatomy. New York, Raven Press.

Hebb DO (1949): The Organization of Behavior and Neuropsychological Theory. New York, Wiley.

Hess WR (1954): Diencephalon — Autonomic and Extra-pyramidal Functions. New York, Grune & Stratton.

Hetherington AW, Ranson SW (1942): The relation of various hypothalamic lesions to adiposity in the rat. Journal of Comparative Neurology 76: 475-499.

Hoebel BG (1988): Neuroscience & motivation: Pathways and peptides that define motivational systems. In: Atkinson RC, et al., eds., Handbook of Psychology, 2nd edition (pp. 547-625). New York, Wiley.

Hoebel BG, Novin D, eds. (1982): The Neural Basis of Feeding and Reward. Brunswick, ME, Haer Institute.

Hoebel BG, Teitelbaum P (1962): Hypothalamic control of feeding & self-stimulation. Science 135: 375-377.

Hull CL (1943): Principles of Behavior. New York, Appleton-Century-Croft.

Margules DL, Olds J (1962): Identical “feeding” and “rewarding” systems in the lateral hypothalamus of rats. Science 135: 374-375.

Milner PM (1989): The discovery of self-stimulation and other stories. In: White NM, Franklin KBJ, eds., The Neural Basis of Reward and Reinforcement. Neuroscience and Biobehavioral Reviews 13: 61-67.

Mogenson GF, ed. (1977): The Neurology of Behavior: An Introduction. Hillsdale, NJ, Erlbaum.

Mogenson GF, Stevenson JA (1966): Drinking & self-stimulation with electrical stimulation of the lateral hypothalamus. Physiology and Behavior 1: 251-254.

Morgan CT (1943): Physiological Psychology. New York, Appleton-Century-Croft.

Moruzzi G, Magoun HW (1949): Brain stem reticular formation and activation of the EEG. Electroencephalography and Clinical Neurophysiology 1: 455-473.

Olds J (1955): Physiological mechanisms of reward. In Jones MR, ed., Nebraska Symposium on Motivation. Lincoln, NE: University of Nebraska Press.

Olds J (1956a): Pleasure centers in the brain. Scientific American 195: 105-116.

Olds J (1956b): A preliminary mapping of electric reinforcing effects in the rat brain. Journal of Comparative and Physiological Psychology 49: 281-285.

Olds J (1958a): Self-stimulation of the brain: Its use to study the effects of hunger, sex and drugs. Science 127: 315-324.

Olds J (1958b): Effects of hunger and male sex hormone on self-stimulation of the brain. Journal of Comparative and Physiological Psychology 51: 320-324.

Olds J (1977): Drives and Reinforcements: Behavioral Studies of Hypothalamic Functions. New York, Raven Press.

Olds J, Milner P (1954): Positive reinforcement produced by electrical stimulation of the septal area and other regions of the rat brain. Journal of Comparative and Physiological Psychology 47: 419-427.

Olds J, Travis RP, Schwing RC (1960): Topographic organization of hypothalamic self-stimulation functions. Journal of Comparative and Physiological Psychology 52: 23-32.

Olds ME (1968): Facilitatory action of diazepam and chlordiazepoxide on hypothalamic reward behavior. Journal of Comparative and Physiological Psychology 62: 136-140.

Rolls ET (1975): The Brain and Reward. Oxford, Pergamon.

Skinner, BF (1938): The Behavior of Organisms. New York, Appleton-Century-Croft.

Stellar E (1954): The physiology of motivation. Psychological Review 61: 5-22.

Stellar JR, Stellar E (1985): The Neurobiology of Motivation and Reward. New York, Springer-Verlag.

Thompson RF (1999): James Olds, May 30, 1922 - August 21, 1976. In: Biographical Memoirs 77. Washington, DC, The National Academy Press.

Thompson RF, Robinson DN (1979): Physiological psychology. In: Hearst E, ed., The First Century of Experimental Psychology. Hillsdale, NJ, Erlbaum.

Valenstein ES, ed. (1973): Brain Stimulation and Motivation. Glenview, IL, Scott, Foresman.

Wauquier A, Rolls E, eds. (1976): Brain Stimulation Reward. Amsterdam, Elsevier/North Holland.

White NM, Franklin KBJ, eds. (1989): The neural basis of reward and reinforcement. Neuroscience and Biobehavioral Reviews 13: 59-186.

Wise RA (1998): Drug-activation of brain reward pathways. Drug Alcohol Dependence 51: 13-22.

The Discovery of Self-Stimulation and Other Stories


Department of Psychology, McGill University, 1205 Dr. Penfield Avenue, Montreal, Quebec, Canada H3A 1B1

MILNER, P. M. The discovery of self-stimulation and other stories. NEUROSCI BIOBEHAV REV 13(2/3) 61-67, 1989. The author’s recollections of the events leading to the discovery of rewarding brain stimulation at McGill University in 1953, with a history of his subsequent attempts to find a learning theory congruent with the phenomenon.

History Brain-stimulation reward Reinforcement Septal area Learning theory


THE events leading to the discovery that animals will stimulate their brains electrically were unfortunately not recorded at the time except by the fallible process of neural storage. My efforts to tap the memories of others who were present during the period of the discovery have not been very successful, so the following account is a very personal one. I cannot hope for detailed accuracy but perhaps I can communicate the general ambience in the psychology department at McGill in the early 1950’s.

In 1948, Donald Hebb, who had arrived from the Yerkes Primate Laboratory the year before, became chairman of the department. At about the same time, James Olds was starting graduate work in the Department of Social Relations at Harvard. Intrigued by the manuscript of Hebb’s forthcoming book, The Organization of Behavior, I had just abandoned a career in engineering physics to take courses that I hoped would qualify me to do graduate work in psychology under Hebb’s direction, a hope that was realised the following year.

In 1949 Hebb’s book (3) was published and during the next few years was widely read, one of the readers being Olds, by that time well on the way to his PhD at Harvard. The Organization of Behavior was a book that appealed immediately to many of the then new generation of theoretical psychologists. It appeared at a time when there was disillusionment with Hull’s theory and psychologists were looking for less artificial and stilted formulations of behavior. Olds’ search for such a formulation had already attracted him to Tolman’s sign-gestalt theory (23). Although somewhat inchoate, the theory looked promising as a foundation that could be built upon, and this was Olds’ intention. His encounter with Hebb’s book then gave him the idea of trying to provide a neural realization of Tolman’s theory. Hebb had, after all, provided a neural model for the “idea” or concept, though he had been, if possible, even vaguer than Tolman in explaining how ideas could determine behavior.

This was a problem that Olds thought he could solve, but his solution entailed a distinctly idiosyncratic interpretation of Hebb’s neural model of a concept, the cell assembly. For example, he distinguished between “firing” and “reverberation” of cell assemblies (corresponding, respectively, to perception and expectation), and he endowed them with attributes of negative and positive motive force. He also introduced “response units” that linked firing and reverberating cell assemblies and postulated that the motivation available for the response represented by such a unit depended on the difference between the motive force of the perception and that of the expected outcome. Thus, for example, when the net motive force of the firing assembly was negative (perception of a dangerous situation) and that of a reverberating assembly (e.g., for a safe place) was positive, the response unit would be strongly facilitated (17).

The theory is remarkable for its behavioral acumen and its neurological naivete (in terms of 1954 neurophysiology), which is not too surprising in view of the fact that Olds was almost perfectly ignorant of physiology when he first read The Organization of Behavior.

Anyone who knew Jim Olds will appreciate that he was not the sort of person to work out a theory and leave it at that. A theory for him meant an experimental program, and a neural theory obviously meant neurophysiological experiments. He was aware of his lack of preparation for such a program and as he also wished to try out his ideas about the cell assembly on the originator of that construct, he applied for a postdoctoral fellowship to work with Hebb at McGill and, the fellowship having been granted, he duly arrived in the fall of 1953.

The situation in the McGill psychology department at that time is rather hard for me to reconstruct, partly because I was in the final throes of my own doctoral research, a condition that promotes egocentricity and myopia, and partly because in those days graduate students were content to remain in ignorance of departmental affairs as long as they posed no serious threats to their research or lifestyle.

Like Jim, I had been attracted to the department because I was seduced by Hebb’s approach to behavior theory, but I was having difficulty in reconciling Hebb’s neurology with what I was learning from seminars in the McGill physiology department and at the Montreal Neurological Institute. At the MNI the arousal system of the brainstem reticular formation was the subject of much discussion and excitement. Furthermore, although I found the cell assembly a most helpful construct for explaining perception and thinking, I found it less useful in dealing with problems of motivation, reinforcement, and the like that were increasingly occupying my thoughts.

I also became aware in due course that the latter problems did not interest Hebb very much, or, at least, he was not interested in modifying or extending the treatment accorded them in his book. To the chagrin of at least some of his graduate students, he was preoccupied with (and no doubt enjoying) his new role as chairman of a rapidly growing department, and in the area of psychology the problems that interested him most were those related to the influence of “nature” and “nurture” in the development of intelligence, and the related problems of the effects of extreme environmental changes, such as sensory deprivation, on animals and man.

Another frustrated Hebbian theorist in the department at that time was Seth Sharpless, a philosophy graduate from the University of Chicago who came to McGill a year or two after I did with the intention of helping to establish a new Hebbian school of behaviorism. Although we would both no doubt have preferred to spend all our spare time telling Hebb how he should change his theory to make it really work, in practice we mostly had to be satisfied with arguing with each other.

Influenced by the work on the reticular formation of the brainstem being carried out by such people as Magoun (15) and Jasper (5), who was giving seminars on the subject at the Montreal Neurological Institute, Sharpless and I evolved a model in which conditioned firing of the arousal system constituted the motivation for correct responses. In a T-maze, for example, if a rat started to turn towards the goal, associations with previous encounters with the goal activated the arousal system and facilitated the response that was being made. We had trouble explaining why animals did not also turn towards a place where they had been punished, which should have fired the arousal system in a similar manner, but we were prepared to tackle that difficulty later if the basic idea looked promising.

We had actually reached the stage of doing a pilot study (which is to say that Seth talked me into doing a pilot study as I already had rats with tegmental electrodes for my thesis research). Three of these rats were stimulated, we hoped, in the arousal system immediately after making a choice in a T-maze. We expected that they would henceforth turn more frequently to the side on which they had been stimulated, but this did not happen; the rats unanimously avoided the stimulated side. Obviously the problem of punishment was not to be taken lightly. Ignoring it was not going to make it go away. Not wanting to get seriously sidetracked by an idea that was clearly defective in some way, I went back to measuring the effects of reticular stimulation on time estimation.

It was at about this time that Hebb brought an alert, friendly young man into my room one day and introduced him as a social psychologist from Harvard who was interested in learning about the brain and would I help him to get started. [I thought Hebb might be making a crack about Harvard’s Department of Social Relations but in fact Jim’s PhD thesis was in social psychology (16) and he also contributed to a book on the family (20).] When I discovered the extent of this new arrival’s knowledge of physiological psychology I was not very sanguine that he would achieve a successful metamorphosis.

He had not been around long before he gave me a copy of the manuscript of his synthesis of Tolman and Hebb that I mentioned earlier, and whilst I must have been deeply impressed by the behavioral aspects (they constituted the basis of my own speculations long after I had forgotten the source), I could see no future in physiological psychology for anyone capable of the reckless and unjustified assumptions about brain functions incorporated in his model. What changed my mind was that I gave him a series of rat-brain sections, a rat-brain atlas and an anatomy book and suggested that he start by learning some neuroanatomy. The first of a number of surprises came when I discovered at the end of the week that he knew the rat brain as well as, if not better than, I did.

My experiments involved electrical stimulation of the brainstem in freely moving rats, and over the previous three years I had managed to work out a fairly successful technique of electrode implantation based on that of Jose Delgado (1). I learned my surgery from Lamar Roberts, a neurosurgeon at the Montreal Neurological Institute, helping him to implant cortical electrodes in dogs. At the MNI, animal operations were conducted in essentially the same way as those on human patients, with long periods of scrubbing, sterile gowns, masks, gloves, and so on, in spite of which the wounds through which the wires emerged almost always became infected in our dogs and had to be chronically dressed with sulpha drugs.

With rats I used only semisterile operations but I did not dare relax the precautions very much as even rats frequently became infected along the electrode wires. The electrodes consisted of two or three insulated stainless-steel wires cemented together to form a needle that was stereotaxically implanted and fastened to the skull with dental cement. When the cement had set the projecting wires were disengaged from the stereotaxic holder and bent parallel to the skull, after which they were threaded through a hole in the skin of the neck, and attached to a plug. The scalp incision was then carefully closed with silk sutures.

These technical details are not entirely irrelevant to the story: sometime in the fall of 1953, not too long after he arrived, I introduced Jim Olds to this technique. He successfully implanted a few practice electrodes that I provided and then launched out on his own. For some reason he decided to make his electrodes of heavier gauge wire than I usually used. It is also possible that he did not wait long enough for the dental cement to set before attempting to bend the electrode, with the result that the tip of the electrode, that had been aimed at the tegmentum, was pushed forward into the septal area.

I am not sure why Jim chose to direct his electrode into the reticular formation. He once told me that Hebb had suggested it, but I suspect that Seth Sharpless may have had some influence (I don’t think he had been entirely convinced by the results of my pilot experiment on the reinforcing effect of stimulation there). On the other hand, it may just have been that the reticular formation was the fashionable structure to study in Montreal at that time. In any case, when Jim discovered a few days later that electrical stimulation could control his rat’s behavior in a totally unprecedented way there was great excitement and, of course, all our ideas about the reinforcing function of the reticular formation were revived.

As he had only reached the stage of practice operations, Olds had no formal experimental procedure worked out. What he did was to put the animal on a table top and observe the effect of electrical stimulation (derived, I believe, from an old audiofrequency generator). He noticed that this particular rat would advance, sniffing and searching, whenever the stimulation was turned on, and would stop or turn back when it was switched off. By giving a short burst of stimulation every time the rat turned in a particular direction it was possible to guide it to any part of the table. Shaping of a wide range of behaviors was found to be incredibly easy using this technique. Figure 1 shows a reenactment of the discovery photographed a few months later.

To the observer it was difficult to believe that the rat was not deliberately seeking brain stimulation, but Sharpless and I adopted a hyperskeptical position and put forward alternative explanations such as forced locomotion, or a “clever Hans” rat. We finally agreed that if the rat could learn to close a switch to stimulate itself with nobody around to give it cues, the phenomenon might be taken at its face value.



FIG. 1. A reenactment of the discovery of self-stimulation photographed sometime during the following year. The rat is not on the original table but in a square runway designed to measure the animal’s eagerness to reach stimulation. Jim Olds is holding the rat lead. Peter Milner is the interested observer at the back and Seth Sharpless is supervising from the left foreground.



The McGill psychology department owned no Skinner boxes at the time so one was hurriedly constructed. It bore little resemblance to those fitted with enormous treadles that Olds later built and which he erroneously described in his own account of that first experiment (18). The lever consisted of a long lucite rod attached to a microswitch and projecting into a plywood box. The lever was quite high up and was manipulated rather awkwardly by the rat with its forepaws, but in spite of this the rat almost immediately learned what to do and stimulated itself quite persistently. We were all now convinced that Jim had a real phenomenon. Jim had a movie made in case it never happened again (I often wonder what happened to that movie; it would have some historical interest if it were still in existence) and Hebb invited a reporter from the now defunct Montreal Star, who regularly worked the McGill campus beat, to do a story on the discovery.

This latter development proved somewhat embarrassing. It turned out to be a quiet day for news and the reporter took advantage of this to persuade his editor to feature the story prominently. Thus a highly dramatic account of the discovery, with photographs, was splashed across the front page of the Montreal evening paper, from where it was picked up by the wire services and given considerable coverage worldwide. Jim and I had expected the usual paragraph buried in the middle of the paper and were so unhappy about the prepublication fanfare that we wrote a letter (which was published the next day) protesting the sensational handling of the story and toning things down a bit.

Once the phenomenon was clearly established I tried to replicate it, but after aiming several more electrodes at the same part of the tegmentum without success, I began to suspect that the electrode might, in fact, be somewhere else. There was no way Jim was going to sacrifice his rat to find out where the electrode had gone, but somebody had the brilliant idea of taking it down to the experimental surgery unit that occupied the floor below to have its head x-rayed. From the negative it was apparent that the electrode had been bent forward several millimeters from its intended position, probably to somewhere near the septal area or anterior hypothalamus. I do not believe we ever did get histological confirmation of the actual location. Once this fact had been discovered we started to aim the electrodes at the septal area and soon were able to reproduce the behavior fairly reliably.

At this point Jim’s exceptional organizing ability became manifest. The excitement generated by the discovery of self-stimulation has overshadowed a contribution of a different sort that impressed me greatly, being a classic case of the teacher taught. Jim realised from the first that he was going to need to test very many rats in order to map the effect throughout the brain and he was far from happy with the slow, demanding, and not very reliable methods of electrode construction and implantation that I had adopted. He had been reading Hess and thought that some adaptation of his method of stimulating the diencephalon (4) might be more convenient. To stimulate the brains of awake cats, Hess had screwed a plastic platform to the skull and used it to support electrodes. The scalp was left open and the skull exposed, but as the observations lasted for only a few days at the most, there was no time for a serious infection to develop.

Jim scaled down this technique for use with rats; he had a large number of plastic blocks made to his design, glued wires to them, and modified a crocodile-clip to make contact with the exposed ends of the wires. The blocks were screwed to the skull with jeweller’s screws and the wound was just left open. I do not know how long he expected to be able to use the animals, but with my Neurological Institute indoctrination, I myself certainly did not put it at more than a few days. I was sure that the animals would be hopelessly infected after a week or two and in any case, my experience with the dogs convinced me that the skull would reject the screws after a fairly short time.

To my astonishment none of this happened. The bare skulls almost always healed up beautifully, and the screws remained firmly fixed for months. The method that was designed to be quicker and easier turned out to be far superior in all respects to the dental cement and closed scalp technique, and of course virtually all the implantations that are performed today are based on this method.

In addition to the improved electrode Jim also designed a battery of Skinner boxes with wide treadles that eliminated the need for shaping and, because of their high intrinsic operant level, allowed the effect of aversive stimulation to be measured. In order to prevent accidental overstimulation, and consequent seizures, the current was turned off after about half a second by very noisy Agastat pneumatic time-delay relays originally designed for home furnace applications. The department had no budget for such refinements as response counters so we pressed into service an old polygraph. During the course of the day this produced several hundred feet of paper with a channel for each Skinner box. All that winter and spring the lab resounded with the clatter of Agastat relays, and Jim was to be seen, in the intervals between games of chess, crawling along the long corridor with streamers of recording paper and a ruler. It would have taken months, possibly years, to count individual responses so he measured the self-stimulation performance by using the length of time a rat was pressing at more than a criterion rate (19).
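Olds’ paper-and-ruler measure — crediting a rat with the total time it spent pressing above a criterion rate — can be sketched in a few lines. This is only an illustrative reconstruction of the idea behind the measure; the bin width and criterion rate below are invented values, not those he actually used:

```python
def time_above_criterion(press_times, session_length, bin_width=10.0,
                         criterion_rate=0.5):
    """Total time (seconds) during which the press rate met or exceeded
    `criterion_rate` presses/s, estimated by binning press timestamps
    into fixed windows -- a stand-in for measuring lengths of densely
    marked polygraph paper with a ruler."""
    n_bins = int(session_length // bin_width)
    counts = [0] * n_bins
    for t in press_times:
        b = int(t // bin_width)
        if b < n_bins:
            counts[b] += 1
    # Sum the duration of every window whose press rate meets criterion.
    return sum(bin_width for c in counts if c / bin_width >= criterion_rate)
```

A rat that pressed ten times in the first ten seconds of a thirty-second record, and not thereafter, would score ten seconds by this measure, regardless of exactly how the presses were spaced within the window.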

Of course, from the beginning, even before the phenomenon was firmly established, the implications were hotly discussed. Jim’s first thought was that he had a perfect refutation of the drive-reduction theory of reinforcement, which was dominant at that time. His argument was that stimulation increased the level of brain activity, and was at the same time reinforcing. Reinforcement could therefore hardly be due to the reduction of anything; it must involve an increase of positive affect, pleasure in fact. He often referred to the self-stimulation system as the “pleasure center,” a habit that I did not entirely approve of. Jim himself was behaviorist enough not to be misled by this sort of language, but I was worried that others might take it as an explanation rather than as a descriptive handle.

The possibility that the system might normally be activated by feeding, sexual behavior, and so on, occurred to us almost from the start and Jim’s first graduate student, Rolf Morrison, started to investigate the effect of hunger on self-stimulation and conversely the effect of self-stimulation on eating. Also, in order to satisfy critics who, never having seen the performance, thought it might be some trick of the motor system, Jim ran rats in mazes and alleys for brain-stimulation reward and demonstrated behavior quite similar to that for food reward.

After two years during which, with very little in the way of assistance or funds, Jim performed many experiments and worked out most of the techniques for future research into self-stimulation, he moved to Los Angeles where the greatly improved facilities allowed even higher productivity. He systematically measured the effects of different stimulation currents at sites throughout the brain, he compared brain stimulation with other forms of motivation in the obstruction box and he continued the research on the interaction between natural drives and brain stimulation. He also initiated drug studies and discovered that reserpine and chlorpromazine reduced the effectiveness of brain-stimulation reward. He was the first to explore the effects of amphetamine, barbiturate and lysergic acid.

I found this display of energy somewhat intimidating. It seemed to me that with my limited resources and limited organizational ability any contribution I could make to the neuroanatomical and neurochemical analysis of the motivational systems would be trivial by comparison. Besides, my ambition when I came to work with Hebb was to build on his theory of how the brain worked, and there was still a long way to go before all the new information about the central pathways of reward could be digested. Hebb had revitalized psychological theory with his suggestion that ideas are represented in the cortex by networks of synaptically linked neurons, cell assemblies, whose properties could perhaps be inferred from behavior and investigated physiologically. Unfortunately he developed these ideas when neurophysiology was in a state of rapid flux and I soon discovered from the courses and seminars I attended in the physiology and neurology departments that Hebb’s cell assembly needed to be brought up to date.

In a paper called “The cell assembly: Mark II” (8) I pointed out that something was needed to distinguish between the learning that resulted in cell assemblies and the learning that associated one cell assembly with others. Otherwise, after a time, cortical neurons would all be linked into a massive cell assembly that would fire uselessly whenever any stimulus was presented. I introduced an inhibitory feedback process that limited the maximum number of cortical neurons that could fire at once, effectively preventing more than one cell assembly from firing at any time. Each active assembly “primed” associated assemblies; the priming lasted for several seconds so that a succession of cell assemblies could each contribute their priming and combine to influence which cell assembly would fire next. The cell assembly that accumulated the most priming when the currently firing assembly faded took advantage of the reduced inhibition to assume all the available activity and inhibit its less well-primed competitors. The idea that there might be long-lasting synaptic effects was, like Hebb’s postulate of synaptic learning, pure speculation at the time; the discovery of neuromodulators was still far into the future, but it seemed to me that some such effect was needed to explain behavior.
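The dynamics just described — one assembly firing at a time under global inhibition, priming that decays over a few seconds, and a winner-take-all handover when the active assembly fades — can be caricatured in a short simulation. This is a toy sketch under simplifying assumptions of my own (discrete time steps, a fixed decay factor), not the Mark II model as published:

```python
class AssemblyNetwork:
    """Toy winner-take-all network of cell assemblies. Only one assembly
    is active at a time; the active assembly primes its associates;
    priming decays gradually rather than instantly; when the active
    assembly fades, the best-primed competitor captures all activity."""

    def __init__(self, associations, start):
        # associations: assembly -> {associated assembly: priming weight}
        self.assoc = associations
        self.priming = {a: 0.0 for a in associations}
        self.active = start          # the single currently firing assembly

    def step(self):
        # The fading assembly delivers priming to its associates...
        for nbr, w in self.assoc[self.active].items():
            self.priming[nbr] += w
        self.priming[self.active] = 0.0   # ...and drops out of contention.
        # Global inhibition: the most-primed assembly takes over.
        self.active = max(self.priming, key=self.priming.get)
        # Priming persists for a few "seconds" (steps), decaying each step,
        # so several recent assemblies can jointly bias the next winner.
        for a in self.priming:
            self.priming[a] *= 0.5
        return self.active
```

With a simple chain of associations A→B→C→A, activity cycles through the assemblies one at a time, each handover driven by accumulated priming rather than direct excitation.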

Possibly as a result of my background in electronics I have always been interested in the possibilities of inhibitory circuits and in 1950 when I was looking for a thesis project, one proposal I made was to look for lateral inhibition in the retina as a mechanism for improving acuity. It was turned down as being too technically difficult but Kuffler published something along similar lines a few years later (6). I did, however, write a paper (9) speculating on the role lateral inhibition might play in converting the coarsely tuned, overlapping, output of receptors into more sharply tuned and discriminable activity as it ascended to higher levels of the sensory pathways.
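The idea in that paper — coarsely tuned, overlapping receptor outputs sharpened by inhibition from neighbouring units as the signal ascends — can be illustrated with a minimal center-surround operation. The activity profile and inhibition strength below are arbitrary illustrative numbers, not values from the paper:

```python
def lateral_inhibition(activity, strength=0.4):
    """Each unit is suppressed by a fraction of its immediate neighbours'
    activity (center-surround antagonism); negative results are clipped
    to zero. The effect is to narrow a broad, overlapping profile."""
    n = len(activity)
    sharpened = []
    for i in range(n):
        left = activity[i - 1] if i > 0 else 0.0
        right = activity[i + 1] if i < n - 1 else 0.0
        sharpened.append(max(0.0, activity[i] - strength * (left + right)))
    return sharpened

broad = [0.1, 0.4, 0.8, 1.0, 0.8, 0.4, 0.1]   # coarsely tuned input
sharp = lateral_inhibition(broad)              # peak survives, flanks collapse
```

The peak unit keeps its advantage while the strongly overlapping flanks are suppressed disproportionately, so the relative tuning of the output is sharper and more discriminable than the input.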

The cell assembly paper mentioned earlier contained, in addition to its revisions of Hebb, the seeds of a cognitive learning theory that was presented in more elaborate form a couple of years later. Helping myself liberally to ideas of Tolman (23), Olds (17), MacCorquodale and Meehl (7), Tinbergen (22), and of course Hebb (3), I concocted a behavior theory that fitted the self-stimulation data better than the stimulus-response theories then in vogue. It got its first airing in 1959 at a meeting in Pittsburgh (11), and another a couple of months later in Chicago (10). I have never succeeded in arousing much enthusiasm for the theory; at first I thought this was because it was too big a leap from the theories that most of my colleagues had been nurtured on, but then I found it had already been dismissed long before by none other than Thorndike (21). He called it the ideational theory and criticised it as mentalistic (and, for good measure, as not supported by introspective evidence). Here is his admirable summary of it:

“This theory, which we may call for convenience the representative or ideational theory, would explain the learning of a cat who came to avoid the exit S at which it received a mild shock and to favor the exit F which led it to food, by the supposition that the tendency to approach and enter S calls to the cat’s mind some image or idea or hallucination of the painful shock, whereas the tendency to approach and enter F calls to its mind some representation of the food, and that these representations respectively check and favor these tendencies.”

Thorndike does not say where the theory originated; I suppose it may have been commonly held in pre-Watsonian days. It incorporates the idea of a response tendency that can have motivational associations that either snuff it out or amplify it into an overt response, depending on the sign, which is also the crux of my theory. To avoid the mentalistic language used by Thorndike and which was the source of his objection, it is only necessary to substitute cell-assembly activity, representing a stimulus or environmental event, for the “idea,” and neural representations of responses (motor cell assemblies) for “response tendencies.” The motor cell assemblies are assumed to be too weak to produce an overt response without additional facilitation from a motivation system.

What generates this motivational activity? Motivation level is high (or motor threshold is low) when the animal is in an unfamiliar place. Response tendencies then require no further motivation to become overt and we say the animal explores. If a reward is subsequently encountered it produces vigorous firing in the motivational system (presumably as part of the consummatory response) which produces an association between this system and the stimulus cell assembly and the cell assembly for the response that was firing at the time the reward was discovered. If instead of a reward the animal stumbles into a bad scene it is liable to “freeze.” The motor system is inhibited. In that case it is the motor inhibitory system that acquires connections with the stimulus and response cell-assemblies.

Later, if the animal finds itself in a similar situation, motor cell assemblies for a number of possible responses fire one after the other. If a motor cell assembly that previously led to punishment becomes active (tendency to approach Thorndike’s shocked door) the inhibitory system is fired, via the associations acquired previously, and the response system is inhibited. Alternatively, if a motor cell assembly for the rewarded response starts to fire (tendency to approach Thorndike’s food door) its association with the reward system facilitates responding and the motor activity is amplified until the response becomes overt. This is a highly simplified summary of the theory; it omits the complications needed to explain active-avoidance learning, for example, and ignores the more direct associations between stimuli and response tendencies that speed up responding in frequently encountered situations. I hope, nevertheless, that it conveys the essence of the theory. I still believe that the model provides a good account of a common type of learned behavior and furthermore, that it is a much better starting point for neural, or computer, modelling than the neo-stimulus-response theories some physiological psychologists still apparently use to interpret their findings.

In the 1959 version the sensory and motor cell assemblies were assumed to be in the cortex (explicitly including the paleocortex), the response system was assumed to extend from the motor cortex, through the striatum and subthalamic region to the medulla, and the crucial reward and punishment systems were localized in and around the hypothalamus. No justification was provided for these choices but I would not need to make many changes to bring the model into line with my present ideas about its neural substrate, though some of the pathways linking the structures were not known at the time.

In 1975, at the Beerse Conference on Brain-Stimulation Reward, I presented a basically similar expectancy model (13) complicated by a few refinements. In the earlier model, response tendencies were activated either at random or possibly by association with environmental stimuli. The type of drive present had no influence on the behavior until after a cell assembly for a satisfier of that drive was fired by its associations with the input and response tendency combination. Then it selected that response tendency for amplification. But drive states frequently play a much more immediate role in eliciting responses, as Deutsch (2) pointed out. In the later model, therefore, drive input acquired associations with the cell assemblies representing rewarding stimuli (i.e., stimuli present during the satisfaction of the drive), and when the drive was sufficiently intense it could fire those assemblies. These “reward” cell assemblies, in turn, via their reciprocal association with response cell assemblies, provided a pipeline from an active drive to responses that possessed a high probability of becoming overt in the presence of that drive.

Somewhat rashly, if I hoped to have anyone understand what I was talking about, in this version I also complicated the response mechanism to take account of the way proprioceptive input, and the location of the object being attended to, guided the response. This was no doubt because at about that time I was working on a theory of visual recognition (12) in which I tried to come to terms with the fact that an efficient scheme for recognition requires objects to be represented in the brain in a highly generalized way, with size, retinal localization and suchlike features stripped away. But in order to use the knowledge about the object some way must be found to indicate to the animal which of the many stimuli present on the retina corresponds to the object being recognized. My solution was to assume that the active cell assembly, the representation of the object being attended to, could feed its activity back to the source of its input, where the signal still retained positional information. The enhanced activity at that level could then be used to guide responses directed to the object.

The postulated anatomical substrate for the Beerse model was somewhat more specific than for the earlier versions. The response generating system was assumed to be inhibited via the striatum by activity in the neocortex and hippocampus. Disinhibition of the response generator took place when the striatum was inhibited, which was assumed to occur when reward stimuli reached it via the ascending catecholamine system. To be more specific, I thought signals from food, for example, fired hypothalamic neurons that had been simultaneously facilitated by internal signals denoting hunger, and the resulting activity was delivered to tegmental nuclei where the catecholamine fibers to the striatum and cortex originate. Catecholamine transmitters are usually inhibitory so when the tegmental nuclei are stimulated the output of the striatum is reduced and the response generator is released from inhibition.

The additional important step is that any cortical input to the striatum during the time that the latter was being inhibited by reward activity acquired inhibitory, rather than excitatory connections to that structure. Thus any conceptual (cell assembly) firing associated with reward would disinhibit rather than inhibit the response generator, as required by the theory.

A very similar, but more detailed, version of the theory was published a couple of years later (14). As before the “action engram” consists of three strongly linked cell assemblies, namely those for 1) an incoming stimulus, 2) a response tendency and 3) the most frequently encountered outcome of making that response in the presence of the incoming stimulus. The total collection of such triplets constitutes the animal’s “cognitive map.” If an “outcome” cell assembly is activated by drive input (e.g., hunger), all the stimulus and response members that have that outcome as the third member of their triplets are facilitated via the associative links, and if one of the stimulus assemblies is also receiving sensory input, its associated response element will be activated. Then the outcome cell assembly, which, as explained above, has previously acquired inhibitory connections to the striatum, is fired even more vigorously by the combined effects of drive input and associations from the stimulus and response elements of its triplet, and disinhibits the response mechanism, allowing the response to become overt. If the animal finds itself in a situation where it does not know what the outcome of making a response will be (i.e., no outcome is associated with one or more of its response tendencies) the output from the cortex to the striatum is reduced, hence the response system is less inhibited. Under these conditions response tendencies readily become overt and the animal explores. Only when the consequence of a response can be predicted is the excitatory input to the striatum strong enough to inhibit the response system, and even then, of course, excitatory input may be blocked if the predicted outcome is related to a reward.
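The triplet bookkeeping can be made concrete with a toy lookup. The stimulus, response, and outcome names below are invented for illustration, and the graded neural facilitation of the model is reduced here to a simple all-or-none match:

```python
# A toy "cognitive map": each action engram is a strongly linked
# (stimulus, response tendency, expected outcome) triplet.
COGNITIVE_MAP = [
    ("lever_in_sight", "press_lever", "food"),
    ("spout_in_sight", "lick_spout", "water"),
]

def select_response(cognitive_map, present_stimuli, drive_outcome):
    """Drive input activates an outcome assembly; every triplet sharing
    that outcome is facilitated, and a triplet whose stimulus is also
    present releases its response. With no matching triplet, the
    response system stays relatively disinhibited and the animal
    explores (returned here as None)."""
    for stimulus, response, outcome in cognitive_map:
        if outcome == drive_outcome and stimulus in present_stimuli:
            return response
    return None  # no predictable rewarded outcome: exploration
```

So a hungry animal that can see the lever emits the lever press, while a hungry animal that can only see the water spout has no triplet linking its situation to food and falls back on exploration.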

In the 1977 paper, the disinhibiting effects of rewarding stimuli on the motor system were attributed more specifically to the nigrostriatal and mesolimbic dopamine pathways. The conditioned disinhibition of responses that occurs whenever a cell assembly associated with reward is active, was attributed, as before, to the establishment of inhibitory connections to the striatum from cell assemblies that are active during increased dopamine release from nigrostriatal terminals.

This model also included mechanisms to account for escape and avoidance learning. Passive avoidance is the mirror image of approach. A path carrying aversive signals, probably that running from the centre median nucleus of the thalamus to the striatum, delivers strong excitation to inhibit the central (“voluntary”) motor system, leaving the way clear for reflex escape mechanisms located in the lower brainstem and spinal cord to assume control. Conditioning of this aversive input to the cortical activity that accompanies it ensures that whenever the outcome of a response is expected to be aversive the striatum is strongly activated and that response is suppressed. Active avoidance is more difficult to explain, but it probably involves inhibition of striatal activity by outcomes of responses that have previously terminated unpleasant stimulation.

The model can explain several of the peculiarities of brain stimulation reward. Stimulation of the medial forebrain bundle reward path is assumed to reproduce the effect of a conventional reward, except that it bypasses the gating or modulating effect of drive, which in many cases appears to involve hypothalamic neurons. Thus self-stimulation will take place at low drive levels, though if the electrode is close to cell bodies whose thresholds are reduced by drive input, self-stimulation may be obtained at lower currents when a strong drive is present, an effect frequently observed.

It was apparent from the first self-stimulation experiments that even experienced animals often failed to show much interest in obtaining brain stimulation at the beginning of a session and had to be “primed” with a few bursts of stimulation before they would self-stimulate. This is different from the way animals behave in a feeding situation after having been deprived of food for some hours. A feature of the model is that conventional drive inputs acquire associations with cell assemblies for their satisfiers and in that way “prime” a number of stimulus-response-tendency combinations that have the satisfier as their outcome. Because brain stimulation can operate under very low drive conditions this mechanism for priming effective goal seeking at the beginning of a session is not available. Furthermore, with natural rewards there are well-established cell assemblies for satisfying outcomes, such as food, which can readily acquire connections with other cell assemblies and with drive inputs. The brain stimulation that mimics a satisfier by inhibiting the striatum has no such well-established cell assembly and it is probably accompanied by highly abnormal cortical activity. Thus, even if a drive were present during self-stimulation it would not form very strong associations with the unusual “satisfier” cortical activity, and, more importantly, the stimulus and response tendency cell assemblies cannot acquire very effective connections with the satisfier outcome either. It is as if the password for turning on the motor system were in some weird code; the chances of recalling it correctly after a long interval are not good. Presenting a few “priming” bursts, however, is equivalent to refreshing the memory of the password; the links between neurons of the pattern are strengthened by various short-term synaptic processes, temporarily converting it into the equivalent of a well-learned cell assembly. When it is in this condition, associations from the cell assembly of a previously rewarded response can produce more vigorous activity in it, resulting in the delivery of more inhibition to the striatum.

Of course these models are only intended to provide a physiological framework for one type of learning, of which maze learning is typical. They are not of much help in understanding simple conditioning (if there is such a thing), or learning to swim or ride a bicycle. I do believe, however, that if physiologists whose research is concerned with the effects of reward, addiction, and so on, would attempt to view their results from the point of view of an expectancy theory of the sort I have outlined, they would find it more useful than fitting their data to the ubiquitous stimulus-response, or neo-Pavlovian, models of learning.



  1. Delgado, J. M. R. Permanent implantation of multilead electrodes in the brain. Yale J. Biol. Med. 24:351-358; 1952.
  2. Deutsch, J. A. The structural basis of behavior. Chicago, IL: University of Chicago Press; 1960.
  3. Hebb, D. O. The organization of behavior. New York: Wiley; 1949.
  4. Hess, W. R.; Brügger, M. Das subkortikale Zentrum der affektiven Abwehrreaktion. Helv. Physiol. Pharmacol. Acta 1:33-52; 1943.
  5. Jasper, H.; Hunter, J.; Knighton, R. Experimental studies of thalamocortical systems. Trans. Am. Neurol. Assoc. 73:210-212; 1948.
  6. Kuffler, S. W. Discharge patterns and functional organization of mammalian retina. J. Neurophysiol. 16:37-68; 1953.
  7. MacCorquodale, K.; Meehl, P. E. Edward C. Tolman. In: Estes, W. K.; Koch, S.; MacCorquodale, K.; Meehl, P. E.; Mueller, C. G., Jr.; Schoenfeld, W. N.; Verplanck, W. S.; Poffenberger, A. T., eds. Modern learning theory. New York: Appleton-Century-Crofts; 1954:177-266.
  8. Milner, P. M. The cell assembly: Mk. II. Psychol. Rev. 64:242-252; 1957.
  9. Milner, P. M. Sensory transmission mechanisms. Can. J. Psychol. 12:149-158; 1958.
  10. Milner, P. M. Learning in neural systems. In: Yovits, M. C.; Cameron, S., eds. Self-organizing systems. New York: Pergamon Press; 1960:190-204.
  11. Milner, P. M. The application of physiology to learning theory. In: Patton, R. A., ed. Current trends in psychological theory. Pittsburgh, PA: University of Pittsburgh Press; 1961:111-133.
  12. Milner, P. M. A model for visual shape recognition. Psychol. Rev. 81:521-535; 1974.
  13. Milner, P. M. Models of motivation and reinforcement. In: Wauquier, A.; Rolls, E. T., eds. Brain-stimulation reward. Amsterdam: North-Holland Publishing Co.; 1976:543-556.
  14. Milner, P. M. Theories of reinforcement, drive, and motivation. In: Iversen, L. L.; Iversen, S. D.; Snyder, S. H., eds. Handbook of psychopharmacology, vol. 7. Principles of behavioral pharmacology. New York: Plenum Press; 1977:181-200.
  15. Moruzzi, G.; Magoun, H. W. Brain stem reticular formation and activation of the EEG. EEG Clin. Neurophysiol. 1:455-473; 1949.
  16. Olds, J. The influence of practice on the strength of secondary approach drives. J. Exp. Psychol. 46:232-236; 1953.
  17. Olds, J. A neural model for sign-gestalt theory. Psychol. Rev. 61:59-72; 1954.
  18. Olds, J. Commentary: The discovery of reward systems in the brain. In: Valenstein, E. S., ed. Brain stimulation and motivation. Glenview, IL: Scott, Foresman & Co.; 1973:81-99.
  19. Olds, J.; Milner, P. M. Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. J. Comp. Physiol. Psychol. 47:419-427; 1954.
  20. Parsons, T.; Bales, R. F.; Olds, J. Family, socialization and interaction process. Glencoe, IL: Glencoe Free Press; 1955.
  21. Thorndike, E. L. Human learning. New York: Century; 1931.
  22. Tinbergen, N. The study of instinct. Oxford: Clarendon Press; 1951.
  23. Tolman, E. C. Purposive behavior in animals and men. New York: Century; 1932.

Pleasure Centers in the Brain

Why aren’t we just as fit to survive if we’re nudged ahead by pains and deficits?

Psychology has gone through a period of fantastic growth during the last 30 years, primarily because it seemed to offer college students some explanation of their malaise and some understanding of themselves which would free them from anxiety. Then why, if we are interested in the weal of these college students, do we expend our research on albino rats? Facetious though it may appear, there must be some truth in the answer that the albino rat is an excellent model of the contemporary American college student, or at least the majority of them. Neither the student, having been bred in America, nor the white rat, having been bred in a laboratory, has ever experienced a need in its life.

The question of what might drive behavior in the absence of needs would have bothered theoretical psychologists and Puritan preachers alike at the time I entered the field in the late 1940’s. The concept of need- driven behavior is simple, compelling, and in complete accord with the simplest concept of evolution. It says that damage to the organism, or deprivation inimical to health, causes physiological processes which are experienced as discomfort, and that behavior proceeds in a random or a guided fashion until discomfort is alleviated. An uncomfortable person is in need, and need justifies a multitude of sins.

A few short steps ahead of this conception–and representing no significant advance in sophistication–is the drive-reduction theory of learning. The conception of this theory is that somewhere in the brain incoming sensory messages cross outgoing motor messages. If a sudden drive reduction occurs, a connection becomes fixed more or less permanently, so that the next time the sensory message will cause the rewarded behavior. It is possible to attack this theory on a variety of grounds, one of which is the simplicity of its attitude toward the data processing that goes on inside the brain. This theory gives rise to a law that states, “Learning occurs only when discomfort is relieved.”

For an organism that seeks novelty, ideas, excitement, and good-tasting foods, the drive-reduction theory was a Procrustean bed. Whatever did not fit was shorn from our image of the man and the rat. Drugs, good foods, and sex were thought of in terms of a need–that is, a hurt generated by withdrawal. Even the coddled white rat was trapped into his forward motion by his residual pains. Behavior was a downhill course toward quiescence, and its energetics were a series of accidents from outside which countered the downhill trend.

If behavior was not aimed to repair these damages and concurrent discomforts, then why was it selected and why did it survive? This rhetorical question was given in answer to all counterarguments.


Neither the student, having been bred in America, nor the white rat, having been bred in a laboratory, has ever experienced a need in its life. What, then, might drive behavior in the absence of needs?


It is interesting that research on the albino rat, if it has not gone far to improve the sophistication of the psychologist, has at least caused a minor revolution with regard to this drive-reduction theory of reward. This has been through the discovery that animals work not only to turn off discomforting stimuli but to turn on brain stimulations in an extensive set of brain regions. These regions are now considered by many to be central counterparts of the positive factors and good things of life that turn on people in everyday life.

The method of drilling holes in the skull and lowering probes by means of a guidance system to stimulate very small and well-localized brain centers was developed gradually in the first half of this century. Professor Walter R. Hess, still alive in Zurich, Switzerland, developed a method of fixing a plaque to the skull from which a probe penetrated deep into the brain and to which a wire from an electric stimulator might be attached. Animals were then permitted to recover from the operation; the small scalp wound healed completely around the plug.

The probes into the brain were metal wires that were insulated except for the very tip, and so the point of electric stimulation was relatively small. The long, loose wire suspended from above permitted about as much movement to the animal as is permitted to a dog on a long leash. Electric stimuli could then be applied during the free behavior of the animal to see how localized electric currents might influence ongoing behaviors, and to see what responses might be evoked by stimulating locally at different brain points.

With these methods Hess discovered, in the cat, places where the basic energy-mobilizing responses of the heart, the lungs, and the preparatory musculature could be controlled. There was a large region where stimulation caused the animal to become prepared for fighting or fleeing, by an increase in heart rate, blood pressure, the rate of breathing, the amount of muscle tone, and so forth; and he found another large adjacent area where electric stimulation caused the opposite of all these actions so that the animal either became prepared for sleep or engaged in one of a number of restorative bodily processes.

The area where brain stimulation caused excitement and preparation for violent activity included parts of the posterior hypothalamus and the adjacent area of midbrain. The area where electric stimulation caused quieter bodily processes of rest and repair included the anterior hypothalamus and related sectors near the cortex.

At about the time I entered the brain-stimulation field, Neal Miller, my famous colleague who is now professor at the Rockefeller University, was the world’s chief proponent of the drive-reduction theory of reward, a theory which he still occasionally professes. He was not only a proponent of this theoretical view, but he was embarked with Jose Delgado of Yale upon an enterprise that would bring the drive-reduction theory into close relation with the work of Hess. The outcome of these studies was to show that stimulation of the posterior hypothalamus and anterior midbrain also caused a psychologically valid aversive condition so that the animal responded as if it were put into genuine discomfort by the electric brain stimulation and as though it afterward became afraid of those places where the brain stimulus had been applied. At a later date Miller and his colleagues were able to show that electric stimulation within a center–which had for other reasons come to be called the “feeding center”–of the hypothalamus caused not merely the behavioral responses of feeding but also a psychologically valid drive, because the animal would not only eat if food were available, but under stimulation would work for food when food was absent.



The theory of drive reduction implies that learning occurs only when discomfort is relieved. But for an organism that seeks novelty, ideas, excitement, and good foods, this theory has its limitations.


It was the title of a Neal Miller talk in 1953 that first caused me to believe that brain stimulation caused not only this discomfort motivation which was so palatable to the drive-reduction theorists, but also perhaps some positive or hedonic motivation which would be the antithesis of their view. Neal Miller used the title “The Motivation of Behavior–” or perhaps he said “The Reinforcement of Behavior–Caused by Direct Electric Stimulation of the Brain.” On first reading the title I thought for a brief moment that he would reward the animals by turning on the electric brain stimulus. When I read his abstract carefully and later heard the talk and saw his movies, I realized that he was causing an aversive reaction with his electric shock to the brain, an aversive reaction which it seemed to me might be caused even more easily if he would apply his electric shocks in any part of the nervous system, even including the forepaws or the hindpaws or the surface of the skin.

Very shortly after my misreading Neal Miller’s title, through a variety of fortuitous circumstances, I was sitting at a table on which there was a large enclosure about 3 feet square with sides 10 or 12 inches high. It contained an albino rat, in which a probe was implanted to stimulate in one of the regions in or near the hypothalamus. A wire suspended from the ceiling connected the animal to an electric stimulator which I controlled by means of a pushbutton hand switch.

For reasons which in retrospect sound foolishly complex or ridiculously random (depending on your point of view), I had decided to stimulate the rat each time it entered one of the corners. It entered a first time, and I applied a stimulation which lasted approximately half a second; the animal made a sortie from the corner, circled nearby, and came back. I stimulated a second time, not more than a minute or so after the first time. The animal made a second brief sortie, but came back even sooner. I stimulated a third time, and the animal stayed with an excited and happy look. (You may wonder how I know, but I have “gone among them and learned their language.”)

The animal kept staying and I kept stimulating, for I was already convinced that the animal had come back for more.

In successive experiments with the same rat, first it was caused to go to any corner of the enclosure selected by an independent observer, provided only that I would apply the brain stimulus immediately after the animal took a step in the right direction. Still later it learned to run to the pre-chosen arm of a T-maze in order to get to a terminal point where electric brain stimulation was applied; the animal eliminated errors and ran faster from trial to trial. Before I was done with this, my first animal, I was convinced that his behavior was directed not to mitigate aversive conditions but rather to instigate a positive excitation. The question, however, had to be asked whether this was an accidental observation or a significant feature of brain and behavior so that it might be taken as exhibiting a fundamental law about the direction of some behavior toward, rather than away from, the excitements of the environment.

Together with Peter Milner, who was at that time my instructor in brain-stimulation methodology, I endeavored to repeat the observation in another animal. This did not at first happen with ease. Some animals with probes directed at or near the original point seemed to favor the stimulus, but others seemed to respond as if it were negative rather than positive. It soon became apparent that careful mapping of the brain would be required to zero in on the critical areas and create a situation where animals could be prepared so that the basic characteristics of the phenomenon might be studied with a variety of methods and be understood.

For this purpose we used a Skinner box in which the animal could stimulate its own brain by depressing a lever. A Skinner box (named after Harvard’s famous behaviorist, B. F. Skinner) is nothing but a small enclosure with a single manipulable device such as a lever, arranged in such a fashion that the animal, by manipulating the device, causes itself to be presented with a reward. The rewardingness of the reward is then measured by the rate of the lever response. For measuring the reward properties of the electric brain stimulations in different centers, Skinner’s method was ideal.


Rewarding effects of brain stimulation are neither accidental nor confined to small, obscure brain centers. Furthermore, the parts of the brain where the best positive effects are achieved are clearly separated topographically from those points of the best aversive effects.


We used a very small box and a very large lever, so that the random rate of pedal pressing was very high during the initial period. If the rate rose rapidly, so that the animal was eventually responding at rates of about one pedal-press per second, it seemed that there were quite clearly rewarding effects of the brain stimulation; if after the first one or two self-administered stimulations the animal stayed away from the lever, these zero rates could be taken as evidence of aversive effects of the electric brain stimulation. With this arrangement it was quite easy to map the phenomenon, and this has provided a basis for an easy reproduction of the rewarding brain stimulation, not only in a large number of experiments which have been performed in my laboratories at UCLA and the University of Michigan, but also in a large number of laboratories throughout the United States and the rest of the world.

The self-stimulation experiments quickly resolved the most basic question: The rewarding effects of brain stimulation were neither accidental nor confined to small, obscure brain centers. One-quarter to one-third of the points tested yielded self-stimulation behavior to the degree that animals stimulated their brains at very high rates, ranging from one pedal-response every ten seconds in places where the effect was mild, to more than two pedal-responses every second in areas where the positive effect was very intense. Points where brain stimulation had a clearly aversive effect were far less numerous in the rat than were those with a positive effect. Only about one brain point out of every 12 tested caused a rate which was clearly depressed. Furthermore, the parts of the brain where the best positive effects were achieved were clearly separated topographically from those points of the best aversive effects.


“Let’s exchange pushbuttons”–a good joke, but not likely to happen.


The “rewarding” parts of the brain were all related to olfactory mechanisms and to chemical sensors. Among these were many areas where the brain itself seems to act as a detector of sex hormones and hunger factors carried in the blood. Mapping in other animals showed that the same parts of the brain were involved in rat, rabbit, cat, dog, monkey, and man. The experiments have also been conducted successfully in birds and fish, but the brains in these cases are sufficiently different from the brains of the mammals so that I would not want to say whether or not the same parts of the brain were involved in these cases.

The investigations of human patients with implanted electrodes have been carried out in the course of three different kinds of therapeutic procedures: those related to the severe psychotic ailments; those related to the cure, by means of very small brain lesions, of severe intractable pain and of Parkinson’s palsy; and finally there have been those involved in providing temporary relief for cancer victims who had previously been maintained on morphine. Reports of experience from human patients have often been confused, but they have been repeatedly positive; patients have stimulated themselves and have been maintained in far better and happier condition with less deterioration than was ever achieved with drug therapy.

Lest the younger of you fear for yourselves, and the older of you fear for your children, I do not foresee even in these times anyone so avant-garde that he will readily tolerate having his head drilled while having his ears pierced. Probes in the brain over long periods of time create scar tissues, and scar tissues become epileptic foci; the method will never be used except in cases where therapy is acutely required.

One of my friends who was rewriting Aldous Huxley’s Brave New World and combining it with his own version of Orwell’s 1984 brought his novel to an unlikely end by having his two main characters (still a girl and boy, although for some reason the difference was both less conspicuous and less important) both implanted with wires which come from under their long hair and into their pocket stimulators. He shyly suggests, “Let’s exchange pushbuttons.” I am of the view that it’s a good joke, but it’s not a danger.

Back to the rats. The very rapid and intense pedal-response rates were not as immediately convincing to my colleagues as they were to me. People asked whether the brain stimulation might be simply arousing and exciting so that the large animal in the small box would be something like a bull in a china shop and that with such a big lever every behavior would be a pedal-press behavior. Other people suggested that even if there were some disposition on the part of the rat to come back for more stimulation, this might be something induced by the previous stimulus, so the animal, having an aversive aftereffect–something like an itching caused by the first stimulus–would come back and alleviate it by pressing a second time, and a third, and so forth, much in the way one scratches a mosquito bite.


Is a rat in a small Skinner box equipped with a large pedal like a bull in a china shop, where any kind of stimulation results in pedal-press behavior?


To answer these questions–which suggested that perhaps the positive observations were only a sham and not the true substance of a positive reward–we ran a series of behavioral tests. In a maze, animals were trained to run from Start to Goal, where they received only brain shock for reward. Hungry rats ran faster for the brain shock than they did for food. They eliminated errors from trial to trial, thereby indicating that this was no bull-in-a-china-shop phenomenon. They ran purposefully without errors when first tested in the morning, 24 hours after the last previous brain shock, thereby disproving the argument that some aversive consequence of a preceding brain stimulus caused the animal to seek more.

At this point people began to concede that perhaps there was a set of mechanisms in the brain concerned with positive drives which competed with or were an adjunct to the control of behavior by negative needs and aversive mechanisms. But the question needed to be asked whether or not this was some junior partner to be in charge of entertainment and cultural enlightenment after the needs were all cared for, or whether this might be a basic force in behavior, a full competitor with pain and the basic needs.

The first experiment to answer this question showed that animals would cross a grid that administered painful shocks to their feet in order to get to a pedal where they could stimulate their brains. Animals took four times as much electric footshock when they were pursuing the brain reward as a normal hungry rat is willing to tolerate when it is in pursuit of food.

In another experiment rats, to all intents and purposes, gave up food to the detriment of health and underwent the danger of starvation in order to stimulate their brains. In this experiment, animals in a food pedal box were permitted 45 minutes daily (just time enough to get a meal that would maintain them in a healthy condition). When they were offered in this box the alternative of electric brain stimulation, they quickly renounced food almost altogether, and would have died of starvation but for the benign intervention of the experimenter. Other experiments have shown that rats would press one pedal as much as 100 times only for the sake of getting access to a second pedal with which they could stimulate their brains.

And in another set of experiments it has been found that one experience of this positively reinforcing brain stimulation can last for a very long time, having consequences for two or three days. Even a period of two seconds of brain stimulation has modified the animals’ behavior for as long as seven days in the experiments of Carol Kornblith in our own laboratory. In her study, animals were stimulated briefly in the least preferred part of a large enclosure. Often this caused the least preferred place to become the most preferred place, or at least it modified greatly the amount of time which the animal spent in that part of the experimental chamber, and the change lasted for a very long time.

If we grant the very strong influence exerted by these positive brain stimulations on behavior, the problem arises of how these forces interact with negative, aversive influences, and what is their relation to basic drives such as hunger, thirst, and sex.

The problem of interaction between positive and negative mechanisms was first brought to the forefront by experiments of Professor Warren Roberts, now at the University of Minnesota, indicating that there were parts of the brain where electric stimulation simultaneously and paradoxically caused both rewarding and punishing effects. Either the animal would first work to turn the stimulation on, and once it was on, work rapidly to terminate it; or the animal would, if forced to remain in a place where stimulation was available, pedal-press very rapidly as if seeking to obtain it, but rapidly escape from the box if any escape could be found.

In the latter case it appeared that an ambivalent and ambiguous stimulation was being applied, a simultaneous stimulus of opposing neurons which could not normally be activated at the same time.

In mapping the brain areas that yielded pure positive and negative reinforcement, and those that yielded this mixed phenomenon, we found that input pathways to some of the brain nuclei would yield pure reinforcement of one sign–either positive or negative–and output pathways from these same areas would yield just the opposite effect; stimulation of the nuclear masses themselves would yield mixed positive-negative behavior. This suggested inhibitory relations within the nuclear masses between positive and negative neuronal mechanisms. In one test we found an inhibitory chain where stimulation which was rewarding in a first area inhibited behavior related to a second area whose stimulation was aversive; stimulation in the second area which was aversive inhibited behavior related to a third area where stimulation was rewarding. One was plus, two was minus, and three was plus again. And one inhibited two, and two inhibited three. Stimulation of the third area rewarded the animal directly, and also augmented rewarding behavior when it was induced by stimulation of other parts of the brain. Much to our surprise, it also augmented punishment behavior induced by stimulating aversive points or when that behavior was induced by more normal means.


Rats will press one pedal as many as 100 times to get access to a second pedal with which to stimulate their brains–demonstrating that they will work for the reward of brain stimulation.



Hungry rats, offered the alternatives of food and positively reinforcing brain stimulation, quickly renounce food almost altogether.


Several stimulations applied to a pleasure center in an animal prostrated by alcohol will restore muscle tone and awareness, and the animal will then continue to self-stimulate his brain for as long as current is available.

When the current is cut off, the animal sinks back into his stupor.


By these and further experiments we were led to the conclusion that mechanisms of positive and negative emotion interact with one another inhibitorially in the brain, in such a fashion that a predominance of one could inhibit the other, and vice versa. Furthermore, we were led to the conclusion that they might be acting through an area like “3.” If 3 activity were augmented by rewarding stimuli and depressed by aversive stimuli, then 3 might derive an algebraic sum of rewards and punishments so that the animal would have a unitary state, somewhere between very good and very bad; and this would modify future behavior probabilities. This model generated interesting experiments which gave it some support, and it is still a viable theory held by me and some of my colleagues. But as with many good theories in the behavioral sciences, it is still in a state of limited probability.
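The algebraic-sum idea behind the "area 3" model can be made concrete with a trivial sketch. This is an illustrative assumption of my own, not Olds' formalism: a single node adds rewarding inputs, subtracts aversive ones, and yields one unitary hedonic state somewhere between very good and very bad.

```python
# Hypothetical sketch of the "area 3" algebraic-sum model.
# The clipping range and all input values are invented for illustration.

def hedonic_state(rewards, punishments):
    """Derive a unitary state from the algebraic sum of rewarding
    inputs (which augment area-3 activity) and aversive inputs
    (which depress it), clipped to a bounded range [-1, 1]."""
    s = sum(rewards) - sum(punishments)
    return max(-1.0, min(1.0, s))

# A net-positive episode yields a positive state; a net-negative one,
# a negative state that would bias future behavior away from repetition.
print(hedonic_state([0.5, 0.5], [0.25]))   # 0.75
print(hedonic_state([0.25], [1.0]))        # -0.75
```

The single scalar output is the point of the model: whatever the mixture of rewarding and punishing inputs, behavior probabilities are modified by one summed quantity rather than by the inputs separately.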

The overlap of areas yielding positive or rewarding effects with areas where electric stimulation caused aversive reactions led us to wonder whether different drugs and different neuronal messenger chemicals might be involved in activating the two different kinds of neurons.

As a first step toward testing for such differences, experiments were performed in which many different drugs were tested for their influence on self-stimulation behavior. The most interesting outcome of these tests was that a family of popular and intoxicating drugs repeatedly favored self-stimulation over escape behavior: either these drugs left self-stimulation unaffected while abolishing escape behavior, or they actually augmented self-stimulation behavior. We found that an animal which has been prostrated by a large dose of alcohol will lie flaccidly without muscle tone and yield no response when we apply aversive stimulation. Surprisingly, several stimulations applied through a self-stimulation electrode will restore muscle tone, and the animal will arise and self-stimulate for as long as the current is available. If the current is then turned off by the experimenter, the animal will quickly sink back into stupor and flaccidity.

Pentobarbital, the favorite of sleeping-pill enthusiasts, has effects remarkably like those of alcohol. Amphetamine, which activates many behaviors, also activates self-stimulation, and there are tests which strongly suggest that amphetamine has a particular relation to self-stimulation behavior. The relatively popular mild tranquilizers, Miltown and Librium, also favor self-stimulation behavior over escape behavior; Librium and the family of drugs like it cause remarkable accelerations in self-stimulation behavior, even though these drugs have a generally quieting effect on the animal.

It was surprising at first, but I suppose it should not have been, that the main drugs which are currently used to control agitation in the major psychoses–namely, chlorpromazine and reserpine–both have a highly selective effect against self-stimulation behavior. These drugs permit escape behaviors to continue in doses which totally abolish the rewarding effects of brain stimulation, or at least the resulting behaviors.

Studies at a more fundamental level have been directly concerned with the chemicals which carry messages from nerve to nerve. So far as current knowledge and speculation are concerned, the primary messenger is acetylcholine, abbreviated ACH, and the secondary messenger is noradrenalin, abbreviated NA. Drugs given to rats to augment ACH in the brain generally decreased self-stimulation; this led to the speculation that ACH might be more important as a messenger in the negative or aversive systems. Drugs given to augment NA in the brain regularly increased self-stimulation, so this secondary transmitter might be more important in the positive or rewarding apparatus.



Clear evidence that the problem would not be all that simple came from studies which showed a difference between direct and indirect augmentation of NA in the critical centers. Drugs which were applied peripherally to raise brain NA regularly increased self-stimulation. However, NA and NA-like drugs, when they were applied directly in the critical brain centers, often decreased or counteracted behavioral excitations which were caused by stimulating these centers. Furthermore, many recent studies have suggested the possibility that NA might be mainly an inhibitory chemical involved in counteracting rather than instigating neuronal activity.

Many of the drive-reduction theorists would be quick to jump to the suggestion that reward therefore might be mainly an inhibitory neuronal process, a process whereby one system of neurons utilizing norepinephrine would inhibit another set of neurons whose influence would be mainly energizing and perhaps even aversive. While this possibility is not totally unreasonable, I feel that our current knowledge of NA effects in the brain is advancing so rapidly that we must suspend judgment in this area. Progress is being spurred not only by our researches which connect NA to reward, but also by recent advances in many laboratories suggesting that NA and its close relative, serotonin, are very importantly involved in the control of sexual behavior and aggression; it appears possible that both sex and aggression are augmented by drugs which selectively depress levels of serotonin without simultaneously depressing levels of NA.

The paradoxical overlap of brain areas yielding radically different kinds of motivation, which first appeared in approach-escape tests, was further exhibited when the feeding centers that Neal Miller and Jose Delgado had studied in rats were eventually tested to see whether their stimulation would yield rewarding or possibly aversive effects. Before the reward tests were made, two important centers related to feeding were already known. These were roughly outlined topographic entities in the brain where destruction of tissue or electric stimulation was highly likely to affect feeding behavior.

In one of these areas, known as the “satiety center,” destruction of tissues caused animals to overeat and become obese. Careful studies of this phenomenon convinced scientists that this center was normally involved in the termination of eating behavior after the animal had become satisfied. Whereas lesions caused eating to go on and on, stimulation in or near the intact center caused eating behavior to stop.

In a nearby area lies the second center related to feeding, where lesions cause the animal to stop eating altogether; unless the experimenter takes special care, these lesions cause the animal to die of starvation. Stimulation in or near these feeding-center points causes the animal to eat voraciously during the period of stimulation.

Prior to the entry of our work into this field, I believe a relatively simple interaction was assumed. One view was that when neurons in the lateral feeding center became excited, a state of high drive (an aversive condition for the organism) goaded the animal into eating behavior. The ingestion of food, on the other hand, would be detected by receptors in the mouth and in the stomach, and would also modify the chemical state of the blood, and this information would be processed and projected to the medial satiety center, where it might be supposed to cause a positive state of the animal, and to inhibit the aversive lateral drive mechanism.

One of the most surprising findings to date, and one which has dramatically changed the concept of the control of eating behavior, and therefore the control of obesity, was the discovery that stimulation of the feeding center was among those yielding the strongest self-stimulation behavior and, on all of our measures, the strongest kind of positive rewarding effects. As you might or might not guess, the satiety center, where electric stimulation was expected to cause bliss, did not yield any positive reactions at all; instead it turned out to be one of the areas where electric stimulation produced prompt aversive or withdrawal reactions.

So we no longer see hunger as a simple aversive mechanism, or as an aversive mechanism at all. Instead we would say that eating is a positive-feedback mechanism. That is, eating behavior, once triggered, tends to continue. Eating begets eating, and once it gets going it has a marked tendency to intensify itself. From the point of view of the Epicurean subject this is a very satisfactory state of affairs. However, it can be a very sad state of affairs for the person who wants to lose weight and has weak limiting centers. Because food engenders a self-reactivating drive, the main cure for this most common kind of obesity is simply to get and keep the patient away from food stimuli.



We conceive of two limiting centers brought to bear on the feeding process. One is the satiety center, which meters the input in one way or another and gradually or abruptly converts the process from a rewarding one into a neutral and finally into an aversive one, so that eating which was rewarding at the beginning is negatively reinforcing at the end. We are now convinced that there is another center, one which comes very little into play in the albino rat or in the majority of American college students, and this we might think of as a starvation center. We have only recently found intimations of it; it is not too far displaced from the two long-known topographic entities related to feeding. Within this area electric stimulation causes eating behavior, but in this case the stimulation is either neutral or aversive in its effects on behavior.



We now conceive of hunger as being instigated either by accidental encounters between the subject and the succulent stimuli emanating from the food, or, barring that, eventually triggered by an aversive mechanism brought into play when the animal reaches a danger level so far as food supplies are concerned. In either case, once the eating mechanism has been triggered, it moves forward under its own power and would go on indefinitely if other extraneous control devices were not brought to bear. The satiety mechanism of the medial hypothalamus represents precisely this kind of a control device.
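The control scheme just described, a self-sustaining bout held in check by a satiety signal, can be sketched as a toy loop. Everything here (the thresholds, units, and function name) is a hypothetical illustration of the logic, not a model drawn from the research:

```python
# Illustrative sketch of the eating model described above: eating, once
# triggered, feeds back positively on itself and continues on its own,
# until the accumulating satiety signal (the medial-hypothalamus
# "control device") shuts it off. All thresholds and units are invented.
def eating_bout(trigger_strength, satiety_cutoff=10.0, bite_size=1.0, max_steps=1000):
    """Return how many 'bites' occur once eating is triggered."""
    eating = trigger_strength > 0        # an encounter with food (or starvation) starts the bout
    satiety = 0.0
    bites = 0
    while eating and bites < max_steps:
        bites += 1                       # eating begets eating: the bout sustains itself
        satiety += bite_size             # the satiety center "meters the input"
        if satiety >= satiety_cutoff:    # reward turns neutral, then aversive: the bout ends
            eating = False
    return bites

# A triggered bout runs until satiety, regardless of how strong the trigger was.
assert eating_bout(0.1) == eating_bout(5.0)
# With a weak satiety center ("weak limiting centers"), the bout runs much longer.
assert eating_bout(1.0, satiety_cutoff=100.0) > eating_bout(1.0)
```

Note that the trigger only starts the bout; its strength does not determine the bout’s length. That division of labor between trigger and limiting center is the substance of the model.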

This research formed the model for a set of researches on the other drives; a drinking center and a sexual center have now been added to the array of vague entities in the lateral hypothalamus. In these regions other consummatory behaviors are triggered by electric stimulation, and a common denominator among all of the drive centers so far discovered is that electric stimulation there also yields positive rewarding effects on behavior.

So we assume that positive emotional mechanisms are indeed involved in the control of behavior–but why do they exist? Why were animals not just as fit to survive if they were nudged ahead by their pains and their deficits?

I believe the clue lies in our analysis of feeding mechanisms. Why does the animal keep on eating after the starvation trigger is gone? Why does the animal start eating even if he is not starving, but is only stimulated by the sight or smell or taste of food? The answer could well be that creatures with hoards, whether these were laid up at home as the rat hoards pellets in his home cage, or laid up within the animal’s body as man often keeps his pounds of fat on his midriff, were able to survive the lean years. Therefore, in relation to certain objects not so plentiful that they would be available when needed, mechanisms for hoarding promoted the survival of the species. The abstract animal (who never lived so far as I can make out in phylogenetic history) who waited until demise was imminent and then began looking to satisfy his need was on the edge of demise at all times. You might say he was “just” living. His lucky cousin, who is no abstraction, stocked up his larder during the fat years, in preparation for the lean ones, and you might say he enjoyed it. Instead of “just” living, the positive reinforcement creature was really living.




What if there were a shortcut to true happiness? What if true happiness could be provided artificially? Would it be good to take the shortcut? In a recent contribution to Thought, William Davis argues that if there were such a shortcut, it would be good to take it; Davis quickly dampens get-happy-quick schemes, however, by suggesting that the envisioned shortcut will probably never be found.1 Davis’s argument emerges from a thought experiment. The first step is to imagine a pleasure helmet, a machine which would attach to the brain and simulate various neural impulses from the body which produce pleasurable sensations, a machine capable of administering “a jolt of great pleasure every second on the second.” Such a machine could provide much happiness, but it provides no shortcut to true happiness. The pleasure helmet fulfills only the basest of human needs. Even intensely pleasurable impulses must fail to fulfill human needs “for adventure, or for variety, or for knowledge or love or creativity. [The pleasure helmet] might quench those needs, but that’s not the same as fulfilling them.”

Professor Rickertsen is in the Department of Philosophy, Tuskegee Institute, Tuskegee, Alabama.

But the thought experiment goes “one horrible step further. What if… a super pleasure helmet could be developed which not only gave us great sensual pleasure, but also fulfilled all our deeper needs and gave us a deep sense of satisfaction? . . . Can we possibly claim that this is anything but good?” Davis grants that there is a deep-seated human prejudice against giving in to artificial satisfaction of one’s deepest needs, but Davis maintains that the prejudice is due to past experience with imperfect artificial satisfactions:

These artificially induced experiences have taught us that in the long run they do not work; that they meet only superficial needs and even those only for a short while; that they are destructive of other and higher potentials; and that they are an escape from the reality which we hope may somehow be able to satisfy us fully.

But the super pleasure helmet is no ordinary artificial satisfier; it is not subject to the same sort of shortcomings. By hypothesis, the super pleasure helmet fulfills our deepest needs, so it must be good. By hypothesis, the super pleasure helmet provides everything that we could possibly ask for. “Let’s face it. . . . This is what we want.”

The thought experiment is interesting. Davis’s main thesis is not. When the rhetoric is peeled away, a trivial claim remains: If there were a machine that could fulfill all our deepest needs, that machine could fulfill all our deepest needs; in other words: Suppose that someone invented a machine that could give us everything we really want in life. Wouldn’t that machine give us everything we really want in life?

Davis’s other point is that it is not likely that a super pleasure helmet could ever exist, since “it is not likely that such cheating of reality is possible.” As Davis points out, all of our past experience with cheating reality supports that claim. I think that Davis is right in this last claim. But what Davis suggests is that the super pleasure helmet is a practical impossibility. I want to press a stronger claim by arguing that it is a conceptual muddle.

Davis is a sort of super hedonist. The satisfaction of one’s deepest needs is, for him, a matter of attaining a certain psychological state; that is, of feeling in all respects as if one has just done something that would really (not artificially) satisfy a deep need. Davis’s pessimism about the possibility of a super pleasure helmet derives from the belief that it will never be possible to develop a machine which can bring about the required psychological states.

I disagree with Davis’s super hedonism. I believe that there are legitimate and important cases of needs which cannot be fulfilled merely by attaining a certain psychological state. If I am right, then super hedonism is false; and since the super pleasure helmet is supposed to be a machine that fulfills man’s deepest needs by artificially arousing certain psychological states, if super hedonism is false, the super pleasure helmet is logically impossible.

Why suppose that super hedonism is false? Consider two of my needs that are fundamentally different. I have a need for sexual gratification2 which is satisfied when I experience sexual intercourse. I have a need to make my family happy which is satisfied when I make my family happy. Satisfying the former is a matter of achieving a certain psychological state: if I can be made to feel as if I am experiencing sexual intercourse, whether I am actually participating in sexual intercourse or not, my need for sexual gratification will be fulfilled. However, even if I can be made to feel in all respects as if I am making my family happy, my need to make them happy is not satisfied unless I really do make them happy. The point is that satisfying needs is not in every case merely a matter of gaining a certain feeling; sometimes satisfaction of a need depends upon something in the world outside of the individual. No pleasure helmet, no matter how super it is, can satisfy the latter sort of needs.

Consider needs and desires. We can distinguish between mediate desires and ultimate desires. Mediate desires always occur in the context of a belief and some further desire. My desire to eat a chocolate is a mediate desire; I desire the chocolate because I desire (further) a certain pleasurable feeling and I believe that I can achieve that feeling by eating the chocolate. Ultimate desires are desires that are not mediate. The desire for the pleasurable feeling is an ultimate desire. It is not the case that I desire it because I believe that it will bring about something else that I desire. I simply desire it.

Typically, one wants his mediate desires satisfied because that will satisfy some ultimate desire. The raison d’etre of a mediate desire lies in its connection with an ultimate desire. If an alternative way of satisfying the ultimate desire becomes feasible, a way which precludes satisfying the original mediate desire, that is perfectly all right, since what is important is satisfying the ultimate desire. Satisfying the original mediate desire becomes unimportant; in fact, that desire disappears.

How do needs fit into the picture? Needs have got to be related to ultimate desires. To fulfill a need is to satisfy an ultimate desire. There may be ultimate desires which do not constitute needs, because they are insignificant, or something like that, but I shall ignore them here because my arguments will concern only significant ultimate desires.

The un-super pleasure helmet fulfills some needs–some ultimate desires. But it leaves other needs unfulfilled. The possibility of a super pleasure helmet depends upon the nature of ultimate desires. Davis’s model of ultimate desires is the psychological state model: Davis implies that all ultimate desires are desires to attain a certain psychological state. On that model, my need to be creative is a need to achieve a feeling of creation. My desire to create is a mediate desire. I believe that by creating I can achieve the feeling of creation, and I desire that feeling. The psychological state model accurately describes a large group of needs; the question that I have tried to raise is whether it is true for all needs. In particular, is it true for the need to make my family happy?

In order to show that the psychological state model does not accurately describe my need to make my family happy, let us suppose that it does, and consider the results. On the psychological state model, my need to make my family happy is really a need to achieve a certain feeling–a feeling of familial altruism, a feeling in all respects as if I have made my family happy. The desire to make my family happy is, on this model, a mediate desire: I believe that by making them happy, I can achieve the feeling of familial altruism, and I desire to achieve it. Thus, on the psychological state model, my need might be fulfilled even if my family never becomes happy, since it might be possible to artificially achieve the feeling of familial altruism.

Now, if the psychological state model is accurate, and if my desire to make my family happy really is a mediate desire, then as long as I fulfill the ultimate desire of achieving the feeling of altruism, it shouldn’t matter to me whether the particular mediate desire to make my family happy is fulfilled or not. Given a choice between having the feeling of familial altruism artificially stimulated in me, and gaining the feeling by pleasing my family, there should be no rational reason for choosing the latter over the former, if the psychological state model is accurate. Or further, consider these alternatives: Either I can have the feeling of familial altruism artificially stimulated in me, and thus be certain that the ultimate desire will be fulfilled, or I can live in the real world where I have only a reasonable chance of making my family happy, and thus, only a reasonable chance of gaining the feeling of familial altruism. If the psychological state model is accurate, then the rational choice is the former, since in that case it is a sure bet that my need will be fulfilled, and the rational person will choose in a way that will fulfill his ultimate desires–his needs. Certainly, I do want the feeling of familial altruism; the feeling is immensely pleasurable. But it is absurd to be forced into concluding that the rational choice for a person who has a need to make his family happy is the former. In this sort of case, even a slim chance in the real world is better than the perfect illusion. My need to make my family happy cannot be satisfied unless I really do make them happy. Thus, the desire to make my family happy is not just a mediate desire; it is an ultimate desire. But since it cannot be fulfilled by bringing about a certain psychological state in me, it cannot be fulfilled by a super pleasure helmet.

Let us extend this reasoning to one of Davis’s own cases. Consider the need for creativity. Davis maintains that if one feels as if he has created something of real value, and if he is made to believe that he has, then his need to be creative is fulfilled. I think that Davis is wrong. Suppose one is given the following alternatives: Either he can be guaranteed the feeling of creation and the requisite cognitive correlate, both by means of artificial stimulation, or he can have an even chance of achieving the feeling of creation and the belief that he has created. The choice is harder in this case, but there is no doubt that the latter is the rational choice. What that proves is that there is a connection between the need to be creative and the world beyond the individual. It is that sort of connection between needs and the real world that proves the super pleasure helmet to be a myth.


■^William H. Davis, ’’The Pleasure Helmet and the Super Pleasure Helmet,» Journal of Thought, 10 (November, 1975), 290-293. All quotations below are from this work.

2 I am construing sexual gratification as purely sensual pleasure which ordinarily is derived from appropriate sex acts.