Assessing the Ability of Medical Students to Perform Osteopathic Manipulative Treatment Techniques

Michael Finley, DO
From the National Board of Osteopathic Medical Examiners in Chicago, Ill (Boulet, Gimpel); New York College of Osteopathic Medicine of New York Institute of Technology in Old Westbury (Dowling); and Western University of Health Sciences College of Osteopathic Medicine of the Pacific in Pomona, Calif (Finley).
Address correspondence to John R. Boulet, PhD, Educational Commission for Foreign Medical Graduates, 3624 Market St, Philadelphia, PA 19104-2685. E-mail: jboulet@ecfmg.org

Abstract

While osteopathic and allopathic medicine share many commonalities, there are key practice-based differences that uniquely characterize the two professions. For osteopathic medicine, one such defining feature is the use of osteopathic manipulative treatment (OMT). Unfortunately, while various treatment modalities are taught in osteopathic medical schools, there has been relatively little work done to establish standardized evaluation protocols. The purpose of this investigation was to explore the use of OMT assessment in the context of a multistation standardized patient examination.
Analysis of performance data from 121 fourth-year osteopathic medical students indicated that the ability to do OMT can be reliably and validly assessed using a combination of simulated patient encounters, trained osteopathic physician raters, and an objective rating tool. Additional studies that incorporate a larger sample of students and focus on modifications to the assessment tool and rating protocols are warranted.
The use of performance-based assessments in medicine, especially those involving standardized patients, is widespread.1-3 These assessments, which usually involve a series of simulated medical encounters, have been shown to provide scores and assessment decisions with adequate psychometric properties.4-7 Unfortunately, depending on the purpose of the assessment, the content and structure of individual examinations may vary appreciably. As a result, care must be taken in the examination development process to ensure that the simulated clinical encounters are sound and that the evaluation tools (eg, checklists, rating scales) are appropriate for the group or groups being tested. More important, if content-specific skills are to be evaluated, the adequacy of the measurement instruments and resulting scores must be evaluated.
Standardized patient assessments have been developed to measure the clinical skills of osteopathic and allopathic medical students, residents, and physicians.8 For the most part, these evaluations use a mix of people—referred to as standardized patients—trained to portray common clinical maladies. While the primary focus of these assessments has been on teaching and formative assessment, they have also been used for summative purposes. In the United States, most allopathic medical schools have some form of standardized patient program. Likewise, many osteopathic medicine programs use objective structured clinical examinations and standardized patient–based methods for training and evaluation.9 Here, students are able to practice their skills, including history taking, medical communication, and physical examination, in a standardized environment with a simulated scenario and the limited possibility of adverse patient outcomes.
Osteopathic and allopathic medicine share many similarities, including areas of physician training and patient care. Nevertheless, there are some important differences, both philosophic and practice-based. For a professional assessment to be valid, the individuality of the occupation, and associated practice patterns, must be represented in both the content and skill domains. As a result, any differences between osteopathic and allopathic medicine must be taken into account in the test development process. Osteopathic medicine is based on the concept that the normal body, when in physiologic homeostasis, is capable of making its own remedies against disease. The physician's role in this system is to enhance the patient's capabilities, and all interventions (including osteopathic diagnosis and osteopathic manipulative treatment) are focused on this activity. An enhanced emphasis on health promotion, disease prevention, the neuromusculoskeletal system in health and disease, and effective physician-patient relationships is fundamental to the osteopathic philosophy of health care. As a result, the medical knowledge of osteopathic physicians may be somewhat different from that of allopathic physicians,10 with patient encounters often involving a unique set of conditions and treatment modalities (eg, osteopathic manipulative treatment). Therefore, the evaluation and assessment of the clinical skills of osteopathic physicians, as opposed to their allopathic counterparts, will require additional performance measures, different patient scenarios, and tailored scoring rubrics.
The National Board of Osteopathic Medical Examiners administers the Comprehensive Osteopathic Medical Licensing Examination (COMLEX–USA). This set of paper-and-pencil examinations is the standard for testing osteopathic trainees11 and generally focuses on assessing the medical knowledge and clinical reasoning of osteopathic students and graduates. The direct assessment of clinical skills is not currently included in the examination sequence leading to a license to practice medicine. To address this concern, the National Board of Osteopathic Medical Examiners is in the process of developing and validating a performance-based clinical skills examination that uses standardized patients.12 This examination, known as COMLEX–USA–PE, is being designed to evaluate the clinical skills of osteopathic medical school graduates who wish to enter graduate medical education training programs. However, given the differences in philosophy and training between practitioners who use the diagnostic and therapeutic measures of allopathic medicine and those who also incorporate manipulative measures and other features of osteopathic philosophy, the nature and focus of a clinical skills assessment targeted at osteopathic physicians will necessarily be different than those for one developed for allopathic physicians.
Osteopathic manipulative treatment (OMT) is used, often in conjunction with other forms of standard medical care, to manage patient problems.13 Although there has been some concern in the osteopathic medical profession regarding decreased use of OMT,14 there is research substantiating the efficacy of OMT for specific patient conditions, especially those involving the musculoskeletal system.15 Unfortunately, while OMT is a key identifiable feature of osteopathic medicine and numerous studies have been conducted to establish its utility,16,17 there has been little research aimed at evaluating the ability of medical students or practicing physicians to perform OMT correctly.
Traditionally, medical students use OMT on volunteer patients or their peers and are assessed accordingly. This process, albeit of some value for formative feedback, generally lacks standardization and has limited fidelity. Potential differences in patient/subject conditions combined with variable, often subjective, assessment criteria render this strategy of limited use for summative assessments, especially where the stakes may be high.
The practice of osteopathic medicine requires competence in a number of domains, including biomedical knowledge, diagnostic reasoning, and clinical skills. Understanding disease processes and medical problem-solving are important facets of being a physician. Likewise, clinical skills, including history taking, physical examination, and physician-patient communication, are essential talents in patient care. However, unlike allopathic medicine, osteopathic medicine includes OMT. Moreover, even in the well-developed area of clinical skills assessment, little has been done to develop and validate techniques for measuring the ability of physicians to perform various OMT techniques. The purpose of this investigation was to explore the use of OMT assessment in the context of a multistation standardized patient examination. Specific simulated patient encounters were developed and fourth-year osteopathic medical students were required to perform OMT as appropriate. A detailed measurement instrument was developed to assess both methods and techniques.

Methods

Assessment (COMLEX–USA–PE Prototype)

The COMLEX–USA–PE prototype is a 12-station clinical skills examination designed to assess osteopathic medical students. Standardized patients are used for the assessment of clinical skills. The following skills are measured via COMLEX–USA–PE: medical history taking, physical examination, written communication and clinical problem-solving (SOAP note), and physician-patient communication and relationship (global patient assessment). Unlike most other clinical skills assessments, there is an osteopathic emphasis, including evaluation of OMT.
Candidates are given 13 minutes to interview and evaluate the patient in each encounter. A 7-minute postencounter exercise follows each patient interview. Candidates interview standardized patients with a variety of complaints or reasons for visiting the physician. Cases are generally based on high-prevalence osteopathic-specific reasons for visit but are specifically designed to lead to a number of diagnostic outcomes. As part of the assessment, candidates are told to do focused physical examinations where appropriate. They are not instructed to perform particular procedures (eg, take a blood pressure). Likewise, for OMT, the candidates are not directly prompted to perform a particular maneuver. Instead, the standardized patients are trained to elicit some form of treatment through conversations about their medical histories. When prompted, candidates are required to select an appropriate treatment modality. For example, a standardized patient may say that, in the past, OMT was effective for treating his or her back pain.
During the 1-hour pre-examination orientation, candidates are instructed that, in addition to pelvic and breast examinations, high-velocity low-amplitude (HVLA) thrust maneuvers are not allowed. Although it would likely be difficult to recruit standardized patients willing to be treated repeatedly with HVLA maneuvers, these techniques were not excluded primarily because of safety concerns. Rather, because at least 1 week should generally elapse before these maneuvers are repeated, the valid assessment of a student's HVLA skills within a single examination session would be questionable. Other modalities can be repeated safely and comfortably on individual standardized patients throughout the course of the testing.

Students

A total of 121 fourth-year medical students were tested at Western University of Health Sciences College of Osteopathic Medicine of the Pacific in Pomona, Calif. Within this sample, there were two students from Touro University College of Osteopathic Medicine in Vallejo, Calif.

Assessment Form

A 12-case, content-balanced form of COMLEX–USA–PE was administered. The list of cases and associated patient problems is provided in Table 1. All students completed the same 12 cases. However, depending on the examination session, students did not encounter the same set of standardized patients. Patient interviews and treatment (where applicable) were limited to 13 minutes. Students were given 7 minutes to complete their written summaries of the patient encounter (SOAP note). Three cases were specifically developed to include the possibility of OMT.
Table 1. Pilot Study Case Mix

Osteopathic Physician Raters

Sixteen osteopathic physician raters were used in this study. Each rater provided scores for at least one test session (maximum of 12 students). The minimum number of students assessed by a given osteopathic physician rater was 11 (1 test session). The maximum number of students assessed by any given rater was 51 (5 test sessions).
The specialties of the osteopathic physician examiners were as follows: family medicine (7), internal medicine (7, including 2 gastroenterologists, 1 women's health specialist, 1 geriatrician), pediatrics (1), and general surgery (1). Full-time osteopathic manipulative medicine faculty members were excluded from this particular study. The students were primarily recruited from the study site, and the osteopathic physician examiners were from the immediate region. Because osteopathic physician examiners were being used to rate OMT skills, we believed that the presence of recognized osteopathic manipulative medicine faculty members would cue the examinees that they were to be evaluated for OMT at those stations. The more natural cue that was incorporated was the prompt from the standardized patients within the clinical interaction. All osteopathic physician examiners were trained in the use of the assessment scales and were board-certified osteopathic physicians with at least 3 years of clinical practice experience.

Training

Standardized Patients—Standardized patients were trained to portray a patient accurately and consistently, document candidate performance on the appropriate clinical skills checklists, and complete the global patient assessment of physician-patient communication skills. Two standardized patients were trained for each case (24 standardized patients, 12 cases). To ensure accuracy and consistency during the examination, standardized patients playing the same case were trained together for at least 8 hours by the same trainer. Enhanced standardized patient training notes, including indexed checklist items, training videotapes, and benchmark videotapes, were used for instruction.

Osteopathic Physician Examiners

Osteopathic physician examiners were allowed to assess OMT techniques after completion of 4 hours of formal training using videotapes, CD-ROM technology, and hands-on demonstrations. During the training sessions, osteopathic physician examiners were familiarized with the purpose of the examination, the assessment protocols, and the content and structure of the rating tool. Additionally, on each examination day, the examiners participated in a 1-hour orientation session.

Scoring

Medical history taking was measured in each station. Case-specific checklists, completed by the standardized patient after each encounter, were used for scoring. These checklists consist of the relevant patient history questions that should be asked given the nature of the case and the chief patient complaint. A student's medical history taking score is the percentage of items attained. Physical examination skills were measured in 11 (of 12) of the stations. Case-specific checklists, completed by standardized patients following the encounters, were used for scoring. These checklist items reflect the maneuvers that a student should complete in doing a focused physical examination. A student's physical examination score is the percentage of items attained. The data gathering score for a given station is the percentage of the total history taking and physical examination items attained. Summary scores are obtained by averaging the skill scores over encounters.
Even though osteopathic diagnosis and treatment could be included with any of the cases, as part of this study, OMT was specifically assessed in three (25%) of the stations. Evaluations were done by an osteopathic physician in the examination room using the recently developed OMT assessment tool. This instrument has 15 items that can each be scored from 2 (done proficiently) to 0 (done incorrectly or not done). A score of 1 is given for actions that are done, but with hesitation, uncertainty, tentativeness, not performed optimally, etc. A candidate's total score for a given case can range from 0 to 30 on the raw score metric or 0 to 100 on the percent score metric. For stations in which OMT is assessed, the osteopathic clinical skills score is a combination (average) of the data gathering and OMT (converted to a percentage) scores. For stations in which OMT is not assessed, the osteopathic clinical skills score is the data gathering score.
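To make the arithmetic concrete, the following Python sketch illustrates the station-level scoring just described (a minimal sketch only; the item counts, function names, and example values are illustrative assumptions rather than part of the operational COMLEX–USA–PE scoring system):

    # Minimal sketch of the station-level scoring described above; item counts,
    # function names, and example values are illustrative assumptions.

    def percent_attained(items_attained: int, items_total: int) -> float:
        """Checklist score (history taking, physical examination, or data gathering):
        percentage of items credited."""
        return 100.0 * items_attained / items_total

    def omt_percent(item_scores: list) -> float:
        """OMT instrument: 15 items, each scored 0, 1, or 2; raw total 0-30,
        reported on a 0-100 percent metric."""
        assert len(item_scores) == 15 and all(s in (0, 1, 2) for s in item_scores)
        return 100.0 * sum(item_scores) / 30.0

    def osteopathic_clinical_skills(data_gathering_pct, omt_pct=None):
        """Average of the data gathering and OMT percentages when OMT is assessed;
        the data gathering score alone otherwise."""
        if omt_pct is None:
            return data_gathering_pct
        return (data_gathering_pct + omt_pct) / 2.0

    # Example: 14 of 18 combined history/physical items credited; OMT items sum to 26.
    dg = percent_attained(14, 18)                          # 77.8
    omt = omt_percent([2] * 11 + [1] * 4)                  # 26/30 = 86.7
    print(round(osteopathic_clinical_skills(dg, omt), 1))  # 82.2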
Use of the SOAP (subjective, objective, assessment, plan) format to document findings from the patient encounter is common. Physicians document what the patient told them (chief complaint, history of present illness, past medical history), what they saw in the examination (significant positive and negative physical findings), the assessment (problem list, diagnoses), and the plan (treatment, further diagnostic tests). For the current investigation, the notes were scored for each category (S, O, A, and P) and globally by trained osteopathic physician raters. Each note was scored for the subjective, objective, assessment, and plan portions on a 1 to 9 scale, with 1 to 3 being unacceptable and 7 to 9 being superior. Ratings of 4 to 6 were not labeled, but could be considered to represent performance that was better than unacceptable, yet less than superior. The SOAP mean score (range, 1 to 9) is the average of the four category ratings.
The biomedical/biomechanical domain encompasses osteopathic clinical skills and written communication (SOAP). The biomedical/biomechanical score is based on a weighted average of the osteopathic clinical skills (2/3 weight) and SOAP note (1/3 weight) scores. Values, on a percent score metric, can range from 0 to 100.
The humanistic domain includes physician-patient communication and physician-patient relationship skills. The standardized patients in each station evaluated these skills. The standardized patients use the global patient assessment instrument to rate the candidates across six relevant dimensions (clarity of questions, listening, explanation and summarization of information, respectfulness, empathy, and professionalism). Each dimension is rated on a scale of 1 to 9, where 1 to 3 denotes unacceptable performance and 7 to 9 signifies superior performance. The global patient assessment score for a given station is the mean of the six dimension scores.
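The domain-level composites can be expressed in the same way. The sketch below follows the weights stated above; because the conversion of the 1-to-9 ratings to the 0-to-100 metric is not specified here, a simple linear rescaling is assumed:

    # Sketch of the domain composites; the 1-9 to percent conversion is an assumption
    # (linear rescaling), as the text does not state it.

    def soap_mean(s, o, a, p):
        """SOAP note score: mean of the four category ratings, each rated 1 to 9."""
        return (s + o + a + p) / 4.0

    def to_percent(rating_1_to_9):
        """Assumed linear rescaling of a 1-9 rating onto a 0-100 metric."""
        return 100.0 * (rating_1_to_9 - 1.0) / 8.0

    def biomedical_biomechanical(clinical_skills_pct, soap_pct):
        """Weighted average: 2/3 osteopathic clinical skills, 1/3 SOAP note."""
        return (2.0 / 3.0) * clinical_skills_pct + (1.0 / 3.0) * soap_pct

    def global_patient_assessment(dimension_ratings):
        """Humanistic domain: mean of the six dimension ratings (each 1 to 9)."""
        assert len(dimension_ratings) == 6
        return sum(dimension_ratings) / len(dimension_ratings)

    # Example: clinical skills 82.2%, SOAP category ratings of 7, 7, 6, 8, and six
    # hypothetical patient ratings.
    print(round(biomedical_biomechanical(82.2, to_percent(soap_mean(7, 7, 6, 8))), 1))  # 79.8
    print(round(global_patient_assessment([8, 7, 8, 9, 7, 8]), 2))                      # 7.83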

Analysis

Several analyses of student performances were conducted. To quantify overall student ability, mean scores, by case, were calculated. Correlations between OMT scores and other COMLEX–USA–PE elements were calculated in an attempt to provide convergent and discriminant evidence for the validity of the assessment. Item analysis techniques, including principal components analysis, were used to determine the utility and redundancy of particular OMT evaluation criteria.

Results

Descriptive Statistics

Osteopathic manipulative treatment was assessed in 3 of the 12 encounters. Mean student scores for cases B, G, and I, on a percent-score metric, were 88.3 (SD, 14.0), 85.6 (SD, 18.7), and 70.2 (SD, 36.1), respectively (Table 2). Overall, students did well on this part of the assessment. However, for cases B and I, the minimum student score was 0, which suggests that some students did not perform OMT. In 18 (of 363) encounters, the student did not get credit on any item of the OMT scale (0 for all items).
Table 2. Osteopathic Manipulative Treatment Mean Scores (%) by Case
Correlations between OMT scores and other COMLEX–USA–PE components were also calculated. These correlations are based on aggregate student scores for each skill area over the 12 standardized patient encounters. Osteopathic manipulative treatment scores were most highly correlated with biomedical/biomechanical composite scores* (r = 0.47) and global patient assessment scores (r = 0.46). The correlations between OMT and history taking and physical examination were r = 0.19 and r = 0.10, respectively.

Osteopathic Physician Scoring

Sixteen osteopathic physician examiners were used in this study. There was some variability in osteopathic physician scoring for each case. For example, the mean score for osteopathic physician 8 (case I) was 48.2 (n = 11 students). In contrast, the mean score for osteopathic physician 6 (n = 11 students), for the same case, was 88.5. It should be noted, however, that for any given case, the osteopathic physician raters did not evaluate the same students (although they may have rated the same students in other cases). Therefore, some variability in osteopathic physician ratings for a given case would be expected as the result of ability differences in the students being evaluated. Nevertheless, if students were assigned to COMLEX–USA–PE sessions at random, one would not expect differences in mean scores of this magnitude between osteopathic physician raters for identical cases.
Variance components and interstation reliability estimates (generalizability, dependability) were calculated for the OMT ratings (Table 3). The reasonably large case variance component indicates that the difficulty of the OMT task varied across the three encounters. The non-zero osteopathic physician:case (osteopathic physician rater nested in case) variance component indicates that, for specific cases, the osteopathic physicians' ratings varied in average stringency.
Table 3. Variance Components (Osteopathic Manipulative Treatment Scores)
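For readers interested in how the interstation reliability estimates follow from variance components such as those in Table 3, the sketch below applies standard generalizability-theory formulas for a person-by-(rater-nested-in-case) design; the numeric inputs are placeholders and are not the values reported in Table 3:

    # Standard G-theory coefficients for a person x (rater:case) design;
    # the variance-component values below are placeholders, not those in Table 3.

    def relative_g(var_p, var_pc_resid, n_cases, n_raters_per_case=1):
        """Generalizability coefficient: universe-score (person) variance over itself
        plus relative error (person x case interaction confounded with residual)."""
        rel_error = var_pc_resid / (n_cases * n_raters_per_case)
        return var_p / (var_p + rel_error)

    def dependability(var_p, var_c, var_r_in_c, var_pc_resid, n_cases, n_raters_per_case=1):
        """Dependability (phi) coefficient: absolute error also includes the case and
        rater-within-case components."""
        abs_error = (var_c / n_cases
                     + var_r_in_c / (n_cases * n_raters_per_case)
                     + var_pc_resid / (n_cases * n_raters_per_case))
        return var_p / (var_p + abs_error)

    # Placeholder components: person, case, rater:case, and person x case/residual.
    print(round(relative_g(120.0, 260.0, n_cases=3), 2))                 # 0.58
    print(round(dependability(120.0, 80.0, 40.0, 260.0, n_cases=3), 2))  # 0.49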

OMT Item Analysis

The OMT assessment instrument contains 15 items. The mean item scores (0 to 2 scale) and item-total correlations (discrimination, D) by case are presented in Table 4. These item-total correlations indicate how well scores for each facet of OMT distinguish between low- and high-ability students. Ideally, these values should be high, indicating that an item rank-orders student performances consistently with the total score. With the exception of item 14 (reassessment after treatment), students scored well on all items. The reliabilities (internal consistency) of the OMT item scores for cases B, G, and I were 0.83, 0.90, and 0.97, respectively.
Table 4. Osteopathic Manipulative Treatment Item Difficulties and Discriminations by Case
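As an illustration of the item analysis, the following sketch computes item-total correlations and an internal-consistency (Cronbach alpha) estimate from a matrix of 0-to-2 item scores; the data are simulated, and the use of the corrected (item-excluded) form of the item-total correlation is an assumption:

    # Item-total correlations (corrected for item overlap) and Cronbach alpha
    # for a matrix of OMT item scores; the data here are simulated.
    import numpy as np

    def item_total_correlations(scores):
        """Each item (scored 0-2) correlated with the total of the remaining items."""
        total = scores.sum(axis=1)
        return np.array([np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
                         for j in range(scores.shape[1])])

    def cronbach_alpha(scores):
        """Internal consistency of the item scores within a single case."""
        k = scores.shape[1]
        item_var_sum = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_var_sum / total_var)

    rng = np.random.default_rng(0)
    scores = rng.integers(0, 3, size=(121, 15)).astype(float)  # 121 students, 15 items
    print(item_total_correlations(scores).round(2))
    print(round(cronbach_alpha(scores), 2))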

Variable Reduction

A principal components analysis was conducted to ascertain whether there was some redundancy in the variables (items) chosen for the OMT scale. The goal was to reduce the number of observed variables to a smaller number of principal components (artificial variables) that account for most of the variance in the observed variables. For this analysis, data were pooled across cases and osteopathic physician raters; neither was modeled separately.
Based on the principal components analysis, the first two components account for approximately 74% of the total variance. The OMT items and corresponding component loadings are presented in Table 5. The results of the principal components analysis suggest that items 13 and 14, and perhaps item 2, form one construct and the remaining items form another.
Table 5. Rotated Factor Pattern from Principal Components Analysis of OMT Instrument
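A minimal sketch of the variable-reduction step is shown below, fitting a two-component principal components analysis to simulated item scores. Note that this yields unrotated loadings, whereas Table 5 reports a rotated pattern, so an additional rotation step (eg, varimax) would be needed to reproduce that layout:

    # Principal components analysis of the 15 OMT items pooled over cases and raters;
    # simulated data, unrotated loadings (Table 5 reports a rotated pattern).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    item_scores = rng.integers(0, 3, size=(363, 15)).astype(float)  # 121 students x 3 OMT cases

    pca = PCA(n_components=2).fit(item_scores)
    print("proportion of variance explained:", pca.explained_variance_ratio_.round(2))

    # Component loadings: eigenvector weights scaled by the square root of each
    # component's variance.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    print(loadings.round(2))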

Interrater Reliability

As part of the study, approximately 15% of the osteopathic physician OMT evaluations were viewed in real-time via a monitor in the control room. Independent OMT assessments were obtained for these viewed encounters. A total of 63 osteopathic physician observations were made (case B, n = 22; case G, n = 21; case I, n = 20). The correlation between osteopathic physician and osteopathic physician observer sum scores for all cases was r = 0.83. The correlations by case ranged from r = 0.06 (case B) to r = 0.93 (case G). The correlation between ratings for case I was r = 0.90.
The mean absolute discrepancy between osteopathic physician and osteopathic physician observer scores, over the three cases, was 2.4. That is, based on the 0 to 30 scale, the average difference between the two raters, either positive or negative, was 2.4 points. These average values ranged from 2.9 (case B) to 1.9 (case G).
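Both interrater summaries reported here (the Pearson correlation and the mean absolute discrepancy on the 0-to-30 scale) are straightforward to compute; the sketch below does so for hypothetical pairs of sum scores:

    # Interrater comparison on the 0-30 OMT sum-score scale; the scores are hypothetical.
    import numpy as np

    def interrater_summary(rater_sums, observer_sums):
        """Pearson correlation and mean absolute discrepancy between the in-room
        rater and the control-room observer."""
        r = np.corrcoef(rater_sums, observer_sums)[0, 1]
        mean_abs_diff = np.abs(np.asarray(rater_sums) - np.asarray(observer_sums)).mean()
        return r, mean_abs_diff

    rater = np.array([28.0, 25.0, 30.0, 22.0, 27.0])
    observer = np.array([27.0, 24.0, 30.0, 25.0, 26.0])
    r, d = interrater_summary(rater, observer)
    print(round(r, 2), round(d, 2))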

Discussion

The assessment of OMT will be a significant part of the evaluation of the clinical skills of osteopathic physicians. Therefore, it is important that efforts be made to develop and validate instruments that can be used to provide fair and equitable evaluations of OMT. Given the utility and flexibility of standardized patient methodologies, it is apropos that simulated patient encounters be used as a method for testing OMT. However, due to the limited number of encounters that can be used in an assessment and the potential impact of various sources of measurement error (eg, rater stringency, variability in standardized patient portrayal), it is imperative that assessment models be tested and revised where necessary. The present investigation provides much-needed data that can be used to augment the psychometric adequacy of OMT assessment.
The OMT evaluation instrument and associated testing protocol performed adequately in this study. The data analysis indicated that the task was reasonably easy. This may have been due to examinee cueing (only OMT cases had an osteopathic physician observer in the room) or student preparation activities (eg, practicing maneuvers before the assessment). While the internal consistency of the OMT item scores by case was high, the reliability of the OMT scores over encounters (n = 3) was not sufficient to make this a separately reportable COMLEX–USA–PE component. As with other performance-based assessments, task sampling variability can be large, limiting the consistency of scores from one encounter to the next.18 The generalizability analysis, combined with the interrater comparisons, suggests that additional efforts be made to standardize the osteopathic physician rater training protocols and more clearly define the criteria for crediting specific maneuvers or actions. Although some variability in student scores could be expected as a function of the osteopathic physicians who assessed them, it is essential that rater effects be minimized. This could be accomplished through augmentations to the training protocol, multiple osteopathic physician evaluations of individual performances, statistical adjustment for rater effects,19 or some combination of these.
The OMT instrument, though easy to use, could possibly be enhanced for future studies. First, the inclusion of a middle scoring option may not be necessary for some items. For example, although there may be some gradation in the “physician position/posture comfortable” element (item 4), it could probably be validly scored as either “yes” or “no.” For other items where partial credit may be apropos, a more detailed definition of the scoring criteria, especially with respect to suboptimal performance, would be beneficial. Second, it may be advisable to let standardized patients evaluate interpersonal skills,20 eliminating associated items from the OMT tool (eg, checks for patient comfort, maintains communication/eye contact with patient). We found high discrepancy rates between osteopathic physician raters and osteopathic physician observers for item 2 (checks for patient comfort). This would suggest that osteopathic physician raters use different criteria for scoring this element of OMT. It may also be difficult for an observer to see if a medical student is checking for patient comfort, unless the student states something to the standardized patient. Standardized patients, who are evaluating other aspects of the clinical interaction, may be better suited to evaluate this aspect after receiving training. Third, the revision of items that are difficult for the osteopathic physician to assess (eg, reassessment after treatment, physician position/posture comfortable) may be warranted. Item 14 (reassessment after treatment) generally had the lowest discrimination values and, based on the principal components analysis, did not measure the same domain as most of the remaining items.
Although reassessment is important to determine whether treatment was successful and may tap the mindset of the person performing the maneuver, there may be some difficulty observing and assessing this facet of OMT.* Furthermore, given the time constraints of the patient encounter, failure to do a follow-up assessment may not be wholly attributable to potential OMT ability deficits. These issues need to be investigated further. Finally, given the wide array of OMT techniques, the subjective nature of the present assessment criteria, and the potential problems of checklist scoring, it would be useful to investigate whether a holistic or global approach to scoring could be used for the evaluation.21 For this, item 15 (“appears practiced and competent in the technique”), with additional scoring anchors, could be used.
There are some modifications to the assessment protocol that may be apropos. The interrater reliability data suggest that, with the exception of case B, reasonably equivalent scores can be obtained via real-time monitor-based scoring. Therefore, taking the osteopathic physician raters out of the room and substituting scoring done via monitors or videotape should be possible and will add to the verisimilitude of the assessment. This would eliminate the possibility of students being prompted to do OMT or any other element because there was an additional person with a clipboard in the room. Although the decreased use of OMT in practice has been noted,14 it was still surprising that some students did not perform OMT, even though they were aware that it was part of the assessment. This may have been due to candidate discomfort with choosing one of several techniques, skill deficits, or perceived lack of time. In addition, given the simulated nature of the encounter, it is often difficult to use palpatory diagnosis as the basis for initiating a treatment program involving manipulation. Here, it may be possible to use standardized patients with actual physical findings. Given the fundamental role of OMT in osteopathic medical training and the generally satisfactory performance of most students who attempted treatment, the specific reasons for not treating the patient will need to be explored in future studies.
Performance on the OMT assessment, combined with the associations of OMT scores with other clinical skills scores, offers evidence to support the validity of using standardized patient–based encounters to measure OMT. In general, based on the patient complaint, the students were able to choose an appropriate therapeutic technique and perform it adequately.
Given that all students were in their fourth year of osteopathic medical training, this provides support for the chosen content domain of the assessment. The OMT score was most highly correlated with the standardized patient global patient assessment scores, indicating some overlap in measurement domains. This was not surprising, given that some of the current OMT assessment items keyed on patient comfort (“checks for patient comfort”) and physician communication (“instructs patient in clear/concise manner”). Likewise, it would be expected that patients who are treated indifferently or perceive confidence deficits in their health care provider would provide lower overall ratings of interpersonal abilities. The low correlations of OMT scores with data gathering, physical examination, and written communication were expected, given the relative uniqueness of the OMT measurement domain. Additional OMT studies focusing on the student response processes (eg, choice of manipulative technique), internal examination structure (eg, investigating performance differences for select student groups), and relationships to other variables (eg, performance measures with “real” patients) are all warranted.

Conclusion

Although measuring a student's proficiency in OMT techniques can be difficult, it is necessary within the context of evaluating osteopathic clinical skills. The use of standardized patient assessments affords a realistic milieu in which to evaluate physician-patient interactions, including therapeutic techniques. While the COMLEX–USA–PE evaluation could be improved, both in terms of the score scale and the rating protocol, the results of this investigation support its use in assessment of osteopathic medical students.

Footnotes

  • * Osteopathic manipulative treatment scores are part of the biomedical/biomechanical score, which inflates this association.
  • * Unless the student specifically states that he or she is reassessing the patient, it may be difficult for the osteopathic physician rater to score this item.


The Journal of the American Osteopathic Association
