2003 IFA Congress: Montreal, Canada

Easy, Ethical Efficacy 2003

Bruce P. Ryan
Communicative Disorders, 1250 Bellflower Blvd., California State University, Long Beach, California 90840

SUMMARY

Ryan (2001a, 2001b) suggested that it was ethical and possible to evaluate treatment for stuttering using a 10-point rating scale for each of three dimensions: (a) pre-/posttesting of stuttering, (b) description of procedures, and (c) length of treatment in hours, for a total of 30 possible points. The revised scale was discussed and explained to the audience, who were given 1 of 5 treatment procedures to rate using the scale during the latter half of the presentation. The results were collected from the audience and analyzed, revealing that certain treatments scored better than others on the scale.

  1. Background
There are two important ethical treatment prescriptions of ASHA (2003): (a) “that individuals should deliver services competently,” and (b) “individuals shall evaluate the effectiveness of services rendered and products dispensed and shall provide services or dispense products only when benefits can reasonably be expected.”

One interpretation of these is that the clinician should develop or discover and use an efficacious (effective and efficient) procedure. Another is that authorities who develop and write about treatment should test the procedure carefully and provide enough information about it to permit evaluation of its effectiveness and efficiency. There is much current interest in efficacy in the treatment of stuttering (e.g., Brutten, 1993; Conture, 1996; Cordes, 1998; J. Ingham & Riley, 1998; Robey & Schultz, 1998; Ryan, 2001a, 2001b), but most clinicians are extremely busy and often do not have the time or the knowledge to evaluate the efficacy of their own or others’ procedures. There are few simple, but valid and reliable, procedures for evaluating treatment programs.

Using a 30-point Easy, Ethical Efficacy (EEE) rating scale (see Appendix) based on concepts from Ryan (1974, 2001b) and Ingham and Riley (1998), Ryan (2001a) evaluated 13 treatment programs (8 for children and 5 for adults). The rating scale did discriminate among the programs evaluated. The children’s programs averaged 13.3 while the adult programs averaged 14.8. The highest rated children’s program was the Lidcombe Program (26 out of a possible 30) (Onslow, Andrews, & Lincoln, 1994), and the highest rated adult treatment programs (21 out of a possible 30) were those of Kully and Langevin (1999) and Ingham (1999).

The purpose of this presentation was to (a) give participants an opportunity to learn and apply this rating scale through examination of one of five treatments (one preschool, one of two schoolage, one adolescent and adult, or the participant’s own) and (b) report the participants’ ratings of the various treatments to determine the scale’s ease of use and reliability. Such a rating scale should help clinicians choose and practice efficient and effective treatments for people who stutter.

  2. Method
After being asked to volunteer, 33 participants at the oral presentation of this material at the 2003 International Fluency Conference in Montreal (August 2003) requested a folder of materials, indicating their desire to participate as a judge. Copies of the form itself were also distributed to those (n = 15) who said they did not want to serve as a judge. During the first 15 minutes, copies of the scale (see Appendix) were distributed to the audience for training in its use, along with a copy of one published article about one of five treatments or a form for evaluation of the participant’s own program. The author then instructed the judges in the logic and use of the scale.

The basic logic of the evaluation system was presented:
  1. Assessment of stuttering pre and post the treatment phases of Establishment, Transfer, Maintenance, and Follow-up (ETMF), presented as numbers representing objective measures with reliability (e.g., < 1 %SS = 1.3 %SW = 2.0 SW/M), on a number of clients.
  2. A book or manual describing the treatment in enough detail to permit replication (e.g., that of Onslow & associates, 2001, available from the internet). The procedure should lend itself to clear description. For example, “Have the child read one word fluently and then say ‘good’” is very clear; “We counsel the parents” is not very clear because both the content and the form of that counseling are difficult to describe, much less replicate (Ingham & Riley, 1998). For preschoolers, procedures must describe recognition of and control for spontaneous recovery (e.g., repeated measures and trend analysis; Ryan, 2001b).
  3. Time of treatment, including number of hours, and/or days, and/or months, and/or years. Hours are more exact, hence preferable to months or years. Many procedures may work, but those whose treatment time is shorter are more desirable.
Next, for 15 minutes each participant independently (no discussion) evaluated one of the five treatments using a copy of the article referenced: (a) the Lidcombe Program (Onslow et al., 1994), (b) Successful communication for schoolage children (Yaruss & Reardon, 2002), (c) the Monterey Fluency Program (Ryan & Ryan, 1995), (d) the MPI (Ingham et al., 2001), or (e) their own (self-described). The first four procedures were chosen because they were current and provided a cross section of treatments for children and adults; the fifth option encouraged and permitted some of the participants to describe and evaluate their own treatments. Those participants who evaluated their own treatment had to write a 100-150 word description of their procedures on the form given, provide simple information about themselves and their treatment (e.g., years of experience, number of clients), and rate themselves. At the end of 15 minutes, all participants turned in their filled-in copy of the EEE Rating Scale and the article or the description form for their own treatment. Their ratings were accumulated by the author for each of the five possible treatments, and reliability percentages among pairs of judges were calculated for the four treatments other than the self-generated one, for which there was only one judge per treatment.
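The exact formula behind these pairwise reliability percentages is not spelled out here (Table 1, note c, indicates only that they were computed among total scores). The short Python sketch below is one plausible reading, assuming the common smaller-total-over-larger-total convention for percent agreement; the function names and the example scores are illustrative, not data from the study.

```python
# Sketch of mean pairwise percent-agreement reliability among judges'
# total EEE scores for one treatment. ASSUMPTION: agreement between two
# judges is (smaller total / larger total) * 100, a common convention;
# the paper does not state the exact formula used.
from itertools import combinations


def pair_agreement(a: float, b: float) -> float:
    """Percent agreement between two judges' total scores (assumed convention)."""
    if a == b:
        return 100.0  # also covers the case where both totals are 0
    return min(a, b) / max(a, b) * 100.0


def mean_pairwise_agreement(totals: list[float]) -> float:
    """Mean percent agreement over all pairs of the 3-5 judges for one treatment."""
    pairs = list(combinations(totals, 2))
    return sum(pair_agreement(a, b) for a, b in pairs) / len(pairs)


# Hypothetical totals for four judges rating the same treatment.
print(round(mean_pairwise_agreement([24, 26, 25, 27]), 1))  # ~93.7
```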

  3. Results and Discussion
Of the 33 volunteer participants, only 16 (48%) judges turned in completed materials. It is not clear why the other 17 first took the materials and then were unable to complete the task. One can only guess that (a) the prospective judges had limited English (a number of different countries were represented in the audience) or (b) the task was too daunting once the judges had seen the material. The incomplete materials were relatively evenly distributed among the five groups, except for the self-generated treatments, for which all five were incomplete. The results of these ratings and their reliability for the 16 participating judges who completed the form are shown in Table 1. The three behavioral, evidence-based procedures scored the highest and similar to one another (less than 1.5 pts separated the two most disparate total mean scores), which is consistent with their designation as evidence-based procedures that have been shown to be effective and efficient. Ryan (2001a) had reported a score of 26 for the Lidcombe Program using slightly different source materials.

The Yaruss and Reardon (2002) treatment suggestions rated the lowest of the four treatments that were scored. This was to be expected. One judge reported “no data,” so that the procedures could not be evaluated. There were no efficacy data in the article itself, nor in any of the references of the article. This situation made the point that it was difficult, if not impossible, to rate on this scale any treatment that did not have any efficacy data. Further, it should be noted that, to the author’s knowledge, there are no treatment efficacy data extant for the large majority of the procedures suggested by Yaruss and Reardon (2002), as noted by Cordes (1998) in an analysis of similar procedures. In the author’s opinion, Yaruss and Reardon, as should any authors of treatment procedures, should have presented efficacy data somewhere to support their suggestions for treatment, else some might consider them in violation of the two ethical principles (competency and evaluation) discussed above. The report did receive some points, rightfully so, for the description of the procedures (M = 3.5). Description (Part B of the scale) is the most subjective of the three areas. It is difficult, if not impossible, to rate this element of the scale reliably, especially in the case of procedures such as those presented in Yaruss and Reardon (2002).

In retrospect, too much was expected in too short a period of time from those who elected to describe their own program in writing and then also rate it (i.e., recalling data from memory). In the future, such participants could be given or sent the two related forms (the EEE Rating Scale and the description of their own program) to fill in and rate their own procedures later, in their own setting, with their related data easily available and enough time to complete the task. When finished, they could mail the forms to the author in a self-addressed envelope provided by the author.

Despite these limitations, and the question of validity (i.e., is speech fluency the single most important measure of efficacy and efficiency [Ryan, 2001b]?), the preliminary results are encouraging. Four of the judges’ groups were able to scan and rate their assigned procedures in 10 minutes or less with reasonable reliability and expected accuracy (i.e., high scores for evidence-based procedures, which offer data for evaluation, and low scores for non-evidence-based procedures [e.g., Yaruss & Reardon, 2002], which offered no treatment efficacy data). These results suggested that the EEE Rating Scale was indeed easy to use and reliable. The EEE scale may help clinicians evaluate their own present procedures and, if those are found wanting, select other, more effective and efficient procedures, permitting them to be ethically (a) competent and (b) self-evaluative. Additional research is needed to determine the scale’s validity.

[Table 1 image: EAS_t1.png]

Table 1. Results of Easy, Ethical Efficacy and Efficiency (EEE) Three-part 30-point Rating
Scale (10 X 3 = 0-30 possible) for Five Treatment Procedures for Preschool (1), Schoolage (2), and Adult (1) Clients Who Stuttered by Judges (the number varied per treatment).
aRel is the mean percent agreement reliability of 3-5 judges for each of the four treatments evaluated except for Self-Own program (no interjudge reliability possible).
bETMF = establishment, transfer, maintenance, and follow-up.
cComputed among total scores only.
dMany zeroes obviated computation.
eJudges unable to find existing data within time limits. Author scored.
fAdjusted for varying size of n’s.

References
ASHA (2003). Code of ethics. http://www.professional.asha.org/resources/deskrefs.

Brutten, G. (Ed.) (1993). Proceedings of the NIDCD Workshop on Treatment Efficacy Research in Stuttering, September 21-22, 1992 [Special issue]. Journal of Fluency Disorders, 18.

Conture, E. (1996). Treatment efficacy: Stuttering. Journal of Speech and Hearing Research, 39, S18-S26.

Cordes, A. (1998). Current status of the stuttering treatment literature. In A. Cordes & R. Ingham (Eds.), Treatment efficacy for stuttering: A search for empirical bases (pp. 117-144). San Diego: Singular Press.

Ingham, J. & Riley, G. (1998). Guidelines for documentation of treatment efficacy for young children who stutter. Journal of Speech, Language, and Hearing Research, 41, 753-770.

Ingham, R. (1999). Performance-contingent management of stuttering in adolescents and adults. In R. Curlee (Ed.), Stuttering and related disorders of fluency (2nd ed.) (pp. 139-159). New York: Thieme.

Ingham, R., Kilgo, M., Ingham, J., Moglia, R., Belknap, H., & Sanchez, T. (2001). Evaluation of a stuttering treatment based on reduction of short phonation intervals. Journal of Speech, Language, and Hearing Research, 44, 1229-1244.

Kully, D. & Langevin, M. (1999). Intensive treatment for stuttering adolescents. In R. Curlee (Ed.), Stuttering and related disorders of fluency (2nd ed.) (pp. 139-159). New York: Thieme.

Onslow, M., Andrews, C., & Lincoln, M. (1994). A control/experimental trial of an operant treatment for early stuttering. Journal of Speech and Hearing Research, 37, 1244-1259.

Onslow and Associates (2001). Manual for the Lidcombe Program of early stuttering intervention. Retrieved September 7, 2001, from http://www.cchs.usyd.edu.au/ASRC/.

Robey, R. & Schultz, M. (1998). A model for conducting clinical-outcome research: An adaptation of the standard protocol for use in aphasiology. Aphasiology, 12, 787-810.

Ryan, B. (1974). Programmed therapy for stuttering in children and adults. Springfield, IL: C. C. Thomas.

Ryan, B. (2001a). Easy, ethical efficacy 2000. In H.-G. Bosshardt, J. S. Yaruss, & H. Peters (Eds.), Fluency disorders: Theory, research, treatment, and self-help. Proceedings of the Third World Congress on Fluency Disorders in Nyborg, Denmark (pp. 354-358). The International Fluency Association. Nijmegen, The Netherlands: Nijmegen University Press.

Ryan, B. (2001b). Programmed therapy for stuttering in children and adults (2nd ed.). Springfield, IL: C. C. Thomas.

Ryan, B. & Ryan, B. (1995). Programmed stuttering therapy for children: Comparison of two establishment programs through transfer, maintenance, and follow-up. Journal of Speech and Hearing Research, 38, 61-75.

Yaruss, J. S. & Reardon, N. (2002). Successful communication for children who stutter: Finding the balance. In J. S. Yaruss (Ed.), Facing the challenge of treating stuttering in the schools: Selecting goals and strategies for success, Part 1 (pp. 195-203). Seminars in Speech and Language, 23.

APPENDIX

Easy, Ethical Efficacy (EEE) 30-Point Rating Scale for Stuttering Treatment Efficacy and Efficiency

[EEE Rating Scale form image: EAS_a1.png]

aETMF = establishment, transfer, maintenance, and follow-up.

bDo not write in this space. Put the total points earned out of 30 possible points in the lowest, farthest right-hand space.

May also divide this score by 3 to translate the score to a 10-point scale (e.g., 25/3 = 8.3 on the 10-pt scale).

Specific Instructions on How to Rate Using the EEE 30-Point Rating System

  A. Pre/post test (total of 10 pts possible). Write points in the Pts column across from each item.
  1. %SS (1.0; best) or %SW (1.3) or SW/M (2.0). Mean (average) percent stuttered syllables, percent stuttered words, or stuttered words per minute. Look for a specific metric. Give 4 pts if less than 1.0 %SS post or after any treatment was completed. Rating scales = 0.
  2. If there is a reliability measure (inter- or intrajudge, r or percentage agreement), give 1 pt.
  3. If the test from 1 (%SS or %SW or SW/M) was given post establishment and transfer, give 1 pt.
  4. If the test from 1 was given post maintenance, give 1 pt.
  5. If the test from 1 was given at follow-up, give 1 pt.
  6. If the number of clients was more than 3, give 1 pt.
  7. If there was some objective measure (a score) on an attitude scale or interview, give 1 pt.
  8. Other. If none of the scores above apply, give 1 pt for (a) “20 percent got better” (or similar statements), (b) use of a rating scale, or (c) speaking rate data.
    Add up all points in the Pts column to get the total. The total score may NOT exceed 10 pts.
  B. Description of procedures (total of 10 pts possible).
  1. Specific, clear descriptions or instructions on how to do the procedure. This is admittedly subjective. Rate this on a 1-5 scale (1 = unclear to 5 = very clear).
  2. If there is a manual, give 1 pt.
  3. If the program discusses establishment, give 1 pt (must be more than “Do establishment”).
  4. If the program discusses transfer, give 1 pt (must be more than “Do transfer”).
  5. If the program discusses maintenance, give 1 pt (must be more than “Do maintenance”).
  6. If the program discusses follow-up, give 1 pt (must be more than “Do follow-up”).
  7. Other. If fewer than 10 points have already been given and the treatment deals with preschoolers, give another point for control for spontaneous recovery (3 measures of stuttering pretreatment demonstrating an up or down trend). If 10 points have already been given but there is no control for spontaneous recovery, subtract 1 pt.
    Add up all points in the Pts column to get the total. The total score may NOT exceed 10 pts.
  C. Efficiency (total of 10 pts possible).
  1. If treatment time is given in hours, give points for hours (e.g., < 10 hrs = 10 pts).
  2. Weeks of treatment: if less than 52 weeks, give 2 pts; if more than 52 weeks, give 1 pt.
  3. Months of treatment: if less than 24, give 2 pts; if more than 24, give 1 pt.
  4. Other. If fewer than 10 points have already been given and the treatment time is described only in some other way (e.g., “two times a week,” with nothing else about time), give 1 pt.
    Add up all points in the Pts column to get the total. The total score may NOT exceed 10 pts.
    Add up all points in the three Pts columns to get the grand total (A + B + C = grand total), which may not exceed 30 pts.
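The arithmetic above is simple enough to express in a few lines of code. The Python sketch below is offered only as an illustration of the per-part caps and totals just described; the item-level points would still be assigned by a human judge, and all names and example scores here are hypothetical rather than part of the published scale.

```python
# Illustrative sketch of the EEE grand-total arithmetic (not part of the
# published scale): each part's Pts column is summed and capped at 10,
# the parts are added (A + B + C, 0-30 possible), and the grand total can
# optionally be divided by 3 for a 10-point-scale equivalent.

def part_total(item_points: list[int]) -> int:
    """Sum one part's Pts column, capped at 10 points."""
    return min(sum(item_points), 10)


def eee_grand_total(pre_post: list[int],
                    description: list[int],
                    efficiency: list[int]) -> int:
    """Grand total = Part A + Part B + Part C (0-30 possible)."""
    return (part_total(pre_post)
            + part_total(description)
            + part_total(efficiency))


# Hypothetical item scores for one treatment report.
total = eee_grand_total(pre_post=[4, 1, 1, 1, 1, 1],   # Part A items
                        description=[5, 1, 1, 1],      # Part B items
                        efficiency=[10])               # Part C items
print(total)                # grand total out of 30 (here 27)
print(round(total / 3, 1))  # 10-point-scale equivalent (here 9.0)
```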
