Burn Your Textbooks! Evidence-Based Practice in Stuttering Treatment
Barry Guitar University of Vermont, Burlington, VT 05401-0010
Stuttering treatment has had a colorful history. Methods of therapy have ranged from Demosthenes’ pebbles to Dieffenbach’s scalpel, from little gold forks in the mouth to little delayed feedback devices in the ear, from exhortations encouraging fluency to admonitions advising more stuttering. Are any of these methods effective? And what do we mean by “effective”? How do we measure it? These are the questions that researchers in the field of stuttering management have been asking for decades. Now the questions should be asked by clinicians, as a habit of their practice. Evidence-based practice is essentially a set of principles for clinicians to use in evaluating and treating their clients. The definition given by one of its developers is that evidence-based practice is “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al., 1996, p. 71). The term “current best evidence” includes the latest journal articles and on-line sources of scientifically sound information on evaluation and treatment techniques. Textbooks can be dangerously out of date, unless they are revised very frequently and teach the reader the latest tools for searching the literature. In this paper I will describe the whys and why-nots of evidence-based practice and then make recommendations for clinicians who are interested in following the principles of this philosophy.
- Why (Burn Your Textbooks)?
This section will discuss reasons why clinicians need to question the efficacy of their diagnostic and treatment techniques, with examples of how evidence can affect practice.
- Is it “Pluttering, Skivering, or Floggering?” (Bloodstein, 1990)
Every practicing stuttering clinician wants to be able to accurately diagnose and evaluate an individual who comes into her treatment room asking for help. Bloodstein’s title above is a reminder that while we can call it anything we want, we usually know what stuttering is. But there are exceptions to being able to easily decide if it’s stuttering. These exceptions include children with high levels of normal disfluency that will diminish as the child grows older, and patients with a sudden onset of disfluency associated with neurological or psychological problems. As new research establishes better methods of making the distinction between normal disfluency and stuttering, clinicians need this information quickly and need to know that it is scientifically sound.
Clinicians also need to know what characteristics, especially in children, predict natural recovery, so that clinical resources can be saved for those clients who would not overcome their stuttering without treatment. Not only do we need evidence of who doesn’t need treatment, we need data on what kinds of treatments are most effective for which children and adults. For example, some of my early research (Guitar, 1976; Guitar & Bass, 1978) was aimed at uncovering characteristics of clients who would and would not benefit from intensive prolonged speech treatment. Two good examples of the evidence needed by clinicians are papers presented here at the IFA congress: one compares the effectiveness of two treatments for stuttering in preschool children (Franken et al.), and another assesses the five-year outcome of an intensive program for teens and adults (Langevin & Kully).
A superb example, from the past, of evidence crucial for clinicians is a study by Hixon and Hardy (1964), which examined whether non-speech tasks were useful in determining articulation ability in children with dysarthria. At the time this study was performed, many clinicians evaluated and treated dysarthric children by having them perform non-speech gestures with their articulators. These authors showed, via multiple-regression analysis, that speech tasks, but not non-speech tasks, were predictive of articulation ability. The evidence from this study informed clinicians not to waste their clients’ time and money with tedious evaluations involving many non-speech tasks and numbing treatment routines involving non-speech drills. Unfortunately, this study came too late to save a friend of mine who was born with a cleft palate. Throughout his school years he was admonished by his speech clinician to work harder so that he could be successful at blowing a softball across a table. After many years of being hectored by his clinician for his inadequate softball-blowing performance, my friend asked the clinician to demonstrate how a person with normal velopharyngeal closure would do it. One try by the speech clinician made it clear to my friend that even a normal speaker couldn’t blow the softball across the table, and he quit speech therapy in disgust. Only later, when he was in college, did he attain essentially normal resonance and articulation. This was with the help of a speech clinician who had evidently kept up with the literature and worked directly on his speech rather than on non-speech drills.
- She gets all A’s, so she doesn’t need stuttering therapy
School children who stutter are falling off clinicians’ caseloads in epidemic proportions! In the U.S., for example, special education laws mandate that all children with disabilities be taught the same curriculum as children without disabilities, but funding to reach this goal has never been forthcoming. Because of this, and because children who stutter often behave well in class and get by academically, these children have been dropped by school clinicians. It is a sizeable problem in the elementary grades and a giant one in junior high and high school. To counteract this trend, evidence is needed to demonstrate that for many school children stuttering impedes their ability to communicate, and thus that the ultimate goal of stuttering treatment is freedom to communicate (Conture & Guitar, 1993; Kamhi, 2003). This evidence needs to come from valid and reliable measures that assess the areas of communication in which the child who stutters is less adequate than the child who doesn’t. This evidence, as well as research indicating that stuttering treatment can improve the child’s communication abilities, could be a powerful tool to convince school officials that the needs of children who stutter are as great as those of other children whose language disabilities limit their reading skills or whose autism restricts their access to education. The school clinician needs to seek out valid and reliable tools for measuring communication adequacy and to develop aspects of treatment that address each child’s deficits. This can be an important result of evidence-based practice.
- Does Deep Breathing Cure Stuttering?
Another important benefit of evidence-based practice is that it helps clinicians maintain an awareness of the evidence for and against various new treatment approaches. This is useful because new treatments for stuttering are developed every year, and some are touted as being practically miracle cures. As a result, when new treatments are publicized in the media, clinicians are bombarded by questions about whether the treatment is as good as it sounds. These questions can be more easily answered if the clinician has a good handle on the treatment evaluation literature. To illustrate, I’ll take a “new” approach that was introduced several years ago. Azrin and Nunn (1974) and Azrin et al. (1979) presented data indicating that a regulated-breathing treatment for stuttering, given in one to two sessions each lasting two to three hours, was effective in eliminating stuttering. Several years later, an independent treatment facility evaluated this approach and presented evidence against the earlier claims of its success (Andrews & Tanner, 1982). This follow-up study by another laboratory informs the clinician with an evidence-based practice that regulated breathing may not be a simple cure. A recent example of the need for evidence is the new SpeechEasy device that was dramatically demonstrated on the Good Morning America and Oprah television shows. In a letter to the ASHA Leader, Roger and Janis Ingham questioned the ethics of these media stunts in the absence of scientific evidence for the device’s effectiveness. A supporter of the device wrote back, saying: “In a recent letter to the editor, Roger and Janis Ingham suggested that the lack of peer-reviewed data on the SpeechEasy made distribution of the device potentially unprofessional. While their concerns hold validity, their arguments may fall short of encapsulating a holistic perspective of stuttering management.
…With more than 100 peer-reviewed and published articles detailing the fluency-enhancing effects of speech feedback, the potential benefits of early distribution may outweigh the risks.” (Snyder, 2003, p. 35)
The clinician with training in evidence-based practice will see the illogical deduction in the last sentence and join the chorus of concern crying out for data on this approach.
It is important to make a distinction between calling for evidence of treatment effectiveness and dismissing a treatment out of hand. The responsible clinician remains open-minded about the possibility that new treatments, even those that appear to be “miraculous,” may be effective, at least with some clients. The SpeechEasy device is a good example of this, too. Lisa Trautman and Carroll Guitar, working with the Stuttering Foundation, have surveyed individuals who wrote to the Foundation asking for information about electronic devices. Of those who responded to the survey, 10 purchased the SpeechEasy device. Of these 10, 4 liked the device, 3 didn’t, and 3 had mixed reactions. Most of those surveyed had used the SpeechEasy for 3 months or less, so a follow-up to the survey is vital (and is now underway). These responses are much like those I’ve heard informally from individuals and family members who have had experience with SpeechEasy. Some find it useful; others don’t. Predictably, the device seems to be more beneficial for adults who realize its limitations. Parents who buy it for their children seem to be disappointed with their child’s resistance to wearing it. My own observations are very weak evidence for or against this approach to treatment. Clearly, what are needed are scientific studies by independent laboratories, assessing the treatment’s effectiveness with many clients, including both children and adults.
A good example of a new treatment that was questioned by many clinicians when it first appeared is the Lidcombe Program. When it was first presented at national and international meetings, the way it was portrayed, as the only ethical (evidence-based) approach to use with preschoolers, raised the hackles of a number of clinicians and made them less than totally receptive. Gradually, however, the evidence for its effectiveness piled up in nine articles supporting its efficacy and effectiveness. Still, the fact that all nine articles were published by a small group of nonindependent clinical researchers is of concern. However, at this very IFA Congress there is a randomized clinical pilot study by a group at the University of Leiden in the Netherlands that compares the Lidcombe Program with a treatment focused on changes in the environment. Such studies by independent groups, comparing two or more treatments with careful research designs, are the very essence of building the evidence for responsible clinical practice.
- Why Not (Burn Your Textbooks)?
This section will present a number of weaknesses in the evidence-based approach to practice.
- All that Glitters...
It has been suggested that systematic reviews, especially meta-analyses (systematic reviews which use statistical techniques to assess treatment effect size), are among the most useful sources of evidence for the benefits of various treatments. Systematic reviews attempt to bring together a large number of studies (thus increasing the number of participants involved in whatever is being assessed), critically analyze them, and summarize the valid and reliable findings. Sackett et al. (2000) suggest that good systematic reviews will indicate how the search was done, will report separately on the results of randomized versus non-randomized trials, will search all languages, will include more than journal articles (e.g., proceedings of congresses and even unpublished reports), and will use studies that show individual data rather than merely summaries of data.
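To make the notion of "effect size" concrete, here is a minimal sketch in Python (with hypothetical numbers, not data from any study cited here) of the standardized mean difference (Cohen's d) that meta-analyses typically aggregate across studies. It expresses a pre- to post-treatment change in units of pooled standard deviation:

```python
import math

def cohens_d(mean_pre, sd_pre, mean_post, sd_post, n_pre, n_post):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    pooled_sd = math.sqrt(((n_pre - 1) * sd_pre**2 + (n_post - 1) * sd_post**2)
                          / (n_pre + n_post - 2))
    return (mean_pre - mean_post) / pooled_sd

# Hypothetical example: mean percent syllables stuttered drops from 12% to 4%
d = cohens_d(mean_pre=12.0, sd_pre=4.0, mean_post=4.0, sd_post=4.0,
             n_pre=20, n_post=20)
print(round(d, 2))  # 2.0
```

This is also why the Andrews et al. (1980) inclusion criterion mentioned below matters: without means and standard deviations, a study's results cannot be converted to a common effect-size metric and combined with others.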
In the field of stuttering treatment, it appears that only two meta-analyses have been conducted. The first, by Andrews et al. (1980), searched journals in the field, review articles of stuttering treatment, conference proceedings, dissertations, and other unpublished studies. They found 42 studies that met the criteria of describing the treatment procedures, having at least 3 subjects, and presenting means and standard deviations of the treatment data. This study was a good systematic review for its time, but had several weaknesses. First, only the literature in English was searched; second, the stipulation that means and standard deviations had to be available excluded single-subject designs, including the present author’s study of EMG biofeedback treatment (Guitar, 1975).
A second meta-analysis of stuttering treatment studies, by Thomas and Howell (2001), was not a comparison of treatment outcomes, but instead an analysis of whether or not treatment efficacy studies published between 1995 and 1998 met Moscicki’s (1993) criteria for such studies. This systematic review of methodology was also plagued by the inevitable problem of not including all treatment studies from the designated time period (cf. the criticism by Ingham & Booth, 2002, and the response by Howell & Thomas, 2002).
- Just Because It’s Easy to Measure...
Evidence-based practice began in the field of medicine, and its emphasis on methodologically sound treatment outcome studies is certainly beneficial to the health of our own profession. However, we should not lose sight of ecological validity. An important aspect of ecological validity is the extent to which treatment is changing things that really matter and changing them by meaningful amounts. As I suggested earlier, many in our profession, but not all, believe that an important goal in stuttering treatment is to help the child communicate more effectively (Conture & Guitar, 1993; Kamhi, 2003). While efforts are underway to develop valid and reliable measures of this skill, the profession is not there yet, and thus there is no valid and reliable evidence (so far as I could tell with a quick search) on treatments that result in improving a child’s communication. But there is evidence that treatments that measure frequency of stuttering show valid, reliable, and significant changes in that variable. Does that mean that clinicians should not use treatments such as the Communication Skills Approach (Rustin et al., 1995) because evidence for its effectiveness is lacking? In her critical appraisal of evidence-based practice, Trinder (2000) highlights the problem that many professions face because of a lack of high-level evidence in the face of pressing clinical problems. Should we withhold all treatment because reliable and valid evidence of treatment effectiveness is missing? I think not.
- The “Arlo Dunkleburger” Effect
When I was a young man looking for work to support myself while I was in stuttering therapy in Kalamazoo, Michigan, I became a Fuller Brush Salesperson. The very nice man who trained me, Arlo Dunkleburger, took me out on my route, demonstrated how to sell brushes, and accompanied me as I learned my new profession. That week in which Arlo Dunkleburger accompanied me, I earned a princely salary, but the following week and in subsequent weeks, working alone, I was far less effective and my pay packet grew increasingly slim. I had encountered the problem that many laboratory-developed treatments face: a treatment may work well under highly controlled conditions, but out in the field, delivered by honest but overworked clinicians, it may not be nearly so effective.
This problem may come from a number of sources. The tightly controlled conditions under which the original studies were done may not be easily duplicated in the real world where most clinicians work. Moreover, clinicians who try to use the approach may not get enough training to deliver the treatment in just the way it was given under tightly controlled conditions. Even with training, they may still not be delivering treatment as it was originally given (cf. Ryan & Van Kirk Ryan, 1983; Thomas & Howell, 2001). This problem may be solvable if enough resources and time are allocated to it. I have often thought that if Arlo Dunkleburger had spent more time training me and following up, I might now be a major executive in the Fuller Brush Corporation.
- Whine Not
This section suggests that we save the energy that we might spend complaining about evidence-based practice to use in making it a useful, real-world-workable philosophy of evaluation and treatment.
- Searchin’ (a hat tip to the Coasters)
Searching for the best, most data-supported diagnostic and treatment approaches can be time-consuming and frustrating, but it can lead to more effective practice. ASHA members have access to a bibliography on treatment efficacy through the ASHA website: http://professional.asha.org/resources/noms.
Current references in stuttering evaluation and treatment are available free to the public through PubMed (http://ncbi.nlm.nih.gov/PubMed/). Those with access to libraries which subscribe to various search engines can use MedLine and PsychInfo to search the medical and psychological literatures.
When you see a salesman hawking a get-rich-quick scheme on television, you immediately have doubts. Anything that sounds too good to be true, you assume, is just that: too good to be true. But evaluating the information you glean from internet discussion groups and from the publications you’ve turned up through your literature searches is harder. One formal way to evaluate evidence about treatment is the well-known “Levels of Evidence” scale, which lists the types of treatment studies in order of trustworthiness:
Level I — Randomized clinical trial with low Type I error and high power1
Level II — Randomized clinical trial with high Type I error and/or low power
Level III — Prospective comparison of nonrandomized groups
Level IV — Retrospective comparison of nonrandomized records and post-tx measures
Level V — Expert opinion; case studies
As an example of why randomized trials are so important, consider a treatment study by Perkins et al. (1974). This publication examined the outcomes of two versions of rate control therapy. In one, clients were simply taught to slow their rate of speech to very low levels, which induced fluency, and then their speech rates were gradually shaped to more normal levels while maintaining fluency. Generalization procedures were used to transfer the fluency to the client’s everyday environment. In the other treatment, clients were taught not only to slow their rates but also to manage breathing, phrasing, and other aspects of speech production at slow rates, and then to maintain these qualities as speech was shaped to a natural-sounding level and generalized. The outcome measures made after treatment showed that the second approach led to a better outcome. However, if you examine the amount of stuttering present in each group before treatment, the first group, which received the simpler treatment, had much more frequent stuttering to begin with. Because the amount of stuttering prior to treatment predicts the amount that will remain after treatment, the study was inherently biased. Random assignment of clients to treatment groups would have eliminated this serious bias.
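The remedy this example points to, random assignment, is simple to state and simple to carry out. A minimal sketch in Python (the participant IDs and group size are hypothetical): shuffling before splitting means that baseline characteristics such as pretreatment stuttering frequency are balanced between arms on average, rather than determined by who is referred to which treatment:

```python
import random

def randomize_to_two_arms(participants, seed=None):
    """Randomly assign participants to two treatment arms of (nearly) equal size.

    Random assignment balances baseline characteristics across arms on
    average, removing the kind of selection bias described in the text.
    """
    rng = random.Random(seed)   # fixed seed only for reproducibility in examples
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs
arm_a, arm_b = randomize_to_two_arms(range(20), seed=42)
print(len(arm_a), len(arm_b))  # 10 10
```

In practice, clinical trials often use constrained schemes such as blocked or stratified randomization rather than a single shuffle, but the principle is the same: chance, not clinician or client preference, determines group membership.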
As you would expect, the lowest level of evidence is expert opinion and single case studies. This is the potential problem with information gained from internet discussion groups. Nonetheless, such groups can be a place to begin. When someone on the internet recommends a new tool or a new approach, you can then search the literature and appraise the evidence available.
- Integrating
Integrating means acting on the information we have gained from searching and appraising the literature, in light of our clinical expertise and our client’s values and situation.
Let us first tackle the issue of our clinical expertise. The challenge of new approaches is that evidence of their effectiveness is based on their being administered appropriately. Earlier, I referred to the problems of clinicians not being adequately trained or simply not following the guidelines for treatment. For example, in a treatment which depends on the clinician consequating moments of stuttering, if stutters are ignored or missed by the clinician, the treatment will be less effective. In a treatment that depends on the client staying in the moment of stuttering until the tension has been reduced to a normal level, if the clinician allows the client to finish the stutter with abnormal levels of tension, the treatment will not work. Thus, proper training is imperative. Those who develop new tools and approaches would seem to be the appropriate individuals to offer training and follow-up guidance in their use.
Now let us consider the issues surrounding the client’s values and preferences. Some years ago a conference was held on treatment efficacy research; one of the topics that emerged was whether the goal of stuttering treatment should be framed in terms of the fluency of the client’s speech output or in terms of some internal experience that the client has when speaking. An invited commentator, Donald Baer, who was well known in the field of applied behavior analysis, made the following observation:
It seems only reasonable to learn that when stutterers are given control of the therapeutic consequences [therapy strategies] that presumably can change their output [speech], some of them choose different targets than would their therapists or, probably, other stutterers, and some of them target not so much their speech output as they do a private response that they describe as sense of “imminent loss of control.” (Baer, 1990, p. 35)
This comment highlights one of the tenets of evidence-based practice: that the clinician must consider what goals and procedures are desired by the client or, when appropriate, the client’s family. This may be a decided advantage in treatment, because clients or families may be more motivated when they play an ongoing part in choosing where to go and how to get there.
The process of working with the client to choose mutually agreeable goals and to reach them requires a continuing balance between clinicians’ expertise and clients’ emerging understanding of themselves. Throughout treatment, clinicians communicate to the client that they are knowledgeable in stuttering and its treatment, that they can be trusted with clients’ welfare, and that they are accepting but firm. At the beginning of treatment, clinicians may begin to educate clients by giving them reading material, videos, or websites to visit and by discussing various approaches and goals. As treatment gets underway, clinicians can help clients understand their own frustrations, fears, hopes, and desires about fluency and about communication. This can yield a fruitful discussion of what goals the client would like to choose.
The last step in evidence-based practice is to evaluate how well the process has worked. This means finding ways of assessing and improving our searching, appraising, and integrating of the best evidence for our practice. But it means more. It also means developing our skills at evaluating our own diagnostic and treatment procedures. How do our outcomes compare with those described in published treatment outcome studies? Are we developing ways of measuring the kinds of outcomes we think are important? If we believe communication abilities are as important as fluency, are we finding ways of measuring these validly and reliably?
1 The term “low Type I error” means that the design and statistical analysis of the treatment study make it less likely that you will find a treatment effect when there really is none. “High power” means that there are enough subjects to detect the expected treatment effects.
Andrews, G., Guitar, B., & Howie, P. (1980). Meta-analysis of the effects of stuttering treatment. Journal of Speech and Hearing Disorders, 45, 287-307.
Andrews, G. & Tanner, S. (1982). Stuttering treatment: An attempt to replicate the regulated- breathing method. Journal of Speech and Hearing Disorders, 47, 138-140.
Azrin, N. & Nunn, R. (1974). A rapid method of eliminating stuttering by a regulated breathing approach. Behaviour Research and Therapy, 12, 279-286.
Baer, D. (1990). The critical issue in treatment efficacy is knowing why treatment was applied. In L.B. Olswang, C.K. Thompson, S.F. Warren, & N.J. Minghetti (Eds.), Treatment efficacy research in communication disorders (pp. 31-39). Rockville, MD: ASHA.
Bloodstein, O. (1990). On pluttering, skivering, and floggering: A commentary. Journal of Speech and Hearing Research, 55, 392-393.
Conture, E. & Guitar, B. (1993). Evaluating efficacy of treatment of stuttering: School-age children. Journal of Fluency Disorders, 18, 253-287.
Guitar, B. (1975). Reduction of stuttering frequency using analog electromyographic feedback. Journal of Speech and Hearing Research, 18, 672-685.
Kamhi, A. (2003). Commentary: Two paradoxes in stuttering treatment. Journal of Fluency Disorders, 28, 187-196.
Pietranton, A. (2003). Evidence-based practice. Paper presented at the SID 4 Leadership Conference, St. Louis, MO.
Sackett, D., Rosenberg, W., Gray, J., Haynes, R. & Richardson, W. (1996). Evidence based medicine: what it is and what it isn’t. British Medical Journal, 312, 71-72.
Sackett, D., Straus, S., Richardson, W., Rosenberg, W., & Haynes, R. (2000). Evidence-based medicine: How to practice and teach EBM. New York: Churchill Livingstone.
Snyder, G. (2003). Prosthetic stuttering management [Letter to the editor]. ASHA Leader, 8(13), 35.
Thomas, C. & Howell, P. (2001). Assessing efﬁcacy of stuttering treatments. Journal of Fluency Disorders, 26, 311-383.
Trinder, L. (2000). A critical appraisal of evidence-based practice. In L. Trinder & S. Reynolds (Eds.), Evidence-based practice: A critical appraisal. London: Blackwell Science.