Abstract
Background
In the literature under review, there are about 300 reported cases of vaginal leiomyoma,
none of them from Cameroon. We report a case of vaginal leiomyoma and highlight the diagnostic
challenges faced at the Douala Referral Hospital (DRH), Cameroon.
Case presentation
A 36-year-old G3P3002, sexually active, married Cameroonian woman presented with a 3-year history
of a vaginal tumour and a 6-month history of dysuria, dyspareunia, cessation of sexual intercourse
and offensive-smelling vaginal discharge. She was initially misdiagnosed despite ultrasonography
and magnetic resonance imaging (MRI); the diagnosis was later corrected by an experienced
radiologist. She underwent first-look laparoscopy and surgical excision of the tumour
through the vagina, and histopathological analysis confirmed leiomyoma.
Conclusion
The posterior location of the vaginal leiomyoma found in this case is a rare occurrence.
The diagnosis is based on careful examination and preoperative imaging (ultrasonography
and MRI); however, the definitive diagnosis is usually made intra-operatively. We
combined laparoscopic exploration of the internal genital organs with per vaginal excision
of the vaginal leiomyoma, and we recommend frozen-section biopsy to exclude leiomyosarcoma.
Uterine fibroids are a major cause of morbidity in women of reproductive age (and sometimes even after menopause). Several factors are thought to underlie the development and incidence of these common tumors, yet their etiology remains relatively unknown. Fibroids most often present through their effect on the woman’s menstrual cycle or through pelvic pressure symptoms. Leiomyosarcoma is a very rare entity that should be suspected in postmenopausal women with fibroid growth (and no concurrent hormone replacement therapy). The gold-standard diagnostic modality for uterine fibroids appears to be gray-scale ultrasonography, with magnetic resonance imaging a close second option in complex clinical circumstances. The management of uterine fibroids can be approached medically, surgically, and by minimal access techniques. The recent introduction of selective progesterone receptor modulators (SPRMs) and aromatase inhibitors has expanded the armamentarium of medical treatment options. Uterine artery embolization (UAE) is now well recognized as a uterine-sparing (fertility-preserving) method of treating fibroids. More recently, the introduction of focused ultrasound (MRgFUS) and radiofrequency ablation (VizAblate™ and Acessa™) for uterine fibroids has added to the options for minimal access treatment. More definitive surgery, in the form of myomectomy or hysterectomy, can be performed via minimal access or open routes. Our article reviews the established information on uterine fibroids, with added emphasis on contemporary knowledge.
"I would give great praise to the physician whose mistakes are small, for perfect accuracy is seldom to be seen." Hippocrates, On Ancient Medicine, IX (tr. by Francis Adams)

Introduction

"All men are liable to error; and most men are, in many points, by passion or interest, under temptation to it." Locke, John, An Essay Concerning Human Understanding (1690), bk. 4, ch. 20, sect. 17.

In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable, and that any error or discrepancy represents a wrong that must be punished [1]. Radiology involves decision-making under conditions of uncertainty [2], and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the "answer" is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. "chest pain", "abdominal pain"), and the reporting radiologist must strive to interpret what may be the concerns of the referring doctor. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray report stating "No evidence of leprosy". When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank.) Notwithstanding these complexities, the public frequently expects that a medical investigation will produce "the correct answer", all the time. This unfortunate over-simplification of a multi-factorial process is often informed by representations in TV dramas, media reports describing every discrepancy or dispute over interpretation as a scandal, and the political imperative to divert anger over perceived failings on to others, preferably easy targets, often portrayed and perceived as privileged.

"Amid many possibilities of error, it would be strange indeed to be always in the right." Peter Mere Latham (1789-1875), General Remarks on the Practice of Medicine, The Heart and its Affections, Ch. IV

With respect to radiological investigations, the use of the term "error" is often unsuitable; it is more appropriate to concentrate on "discrepancies" between a report and a retrospective review of a film or outcome [1]. Professional body guidelines recommend that all imaging procedures should include an expert opinion from a radiologist, given by means of a written report or comment [3]. "Opinion" may be defined as "a conclusion arrived at after some weighing of evidence, but open to debate or suggestion", and thus an expert's opinion should not be expected to be incontrovertible [4]. Error implies a mistake (an incorrect interpretation of an imaging study, in this context). In order for a report to be erroneous, it follows that a correct report must also be possible. Because of the subjectivity of image interpretation, the definition of error depends on "expert opinion".
An observer makes an error if he or she fails to reach the same conclusion that would be reached by a group of expert observers. Errors can only arise in cases where the correct interpretation is not in dispute. Somewhere between the clear-cut error and the inevitable difference of opinion in interpretation is an arbitrary division defining the limit of professional acceptability [4].

"Errors in judgement must occur in the practice of an art which consists largely in balancing probabilities." Sir William Osler (1849-1919), Aequanimitas, with Other Addresses, Teacher and Student.

Unlike physical examination of patients, or findings at surgery or endoscopy, the evidence of a radiologic examination remains available for subsequent scrutiny, and can be used for the study of observer variation. A 20-year literature review in 2001 suggested that the level of clinically significant or major error in radiology is in the range of 2-20%, and varies depending on the radiological investigation [5]. The issue of error in radiology has been recognised for many years. Studies in the 1940s found that CXRs of patients with suspected tuberculosis were read differently by different observers in 10-20% of cases. In the 1970s, it was found that 71% of lung cancers detected on screening radiographs were visible in retrospect on previous films [4,6]. The "average" observer has been found to miss 30% of visible lesions on barium enemas [4]. A 1999 study found that 19% of lung cancers presenting as a nodular lesion on chest x-rays were missed [7]. Another study identified major disagreement between two observers interpreting x-rays of patients in an emergency department in 5-9% of cases, with an estimated incidence of errors per observer of 3-6% [8]. A 1997 study in which experienced radiologists reported a collection of normal and abnormal x-rays found an overall 23% error rate when no clinical information was supplied, falling to 20% when clinical details were available [9]. A recent report suggests a significant major discrepancy rate (13%) between specialist neuroradiology second opinions and primary general radiology opinions [10]. A recent review found that the "real-time" error rate among radiologists in their day-to-day practice averages 3-5%, but also quoted previous research showing that, in patients subsequently diagnosed with lung or breast cancer whose previous relevant radiologic studies had been reported as normal, retrospective review identified the lung cancer on the chest radiograph in as many as 90% of cases and the breast cancer on the mammogram in as many as 75% of cases [11]. Prolonged attention to a specific area on a radiograph ("visual dwell") increases both false negative and false positive errors. Reducing the viewing time for CXRs to less than 4 seconds also increases the miss rate [4]. Comparative studies of other, non-radiologic medical fields have found a similar prevalence of inaccuracy in clinical assessment and examination. A Mayo Clinic study of autopsies published in 2000, which compared clinical diagnoses with post-mortem diagnoses, found that in 26% of cases a major diagnosis was missed clinically [11]. Common experience in radiology suggests that many errors are of little or no significance to the patient, and that some significant errors remain undiscovered. Errors are inevitable, and the concept of necessary fallibility must be accepted. Equally, a threshold of competency is required of all professionals involved in the delivery of radiology services.
IMPACT OF VOLUME AND COMPLEXITY

The volume and complexity of the information provided to radiologists for reporting have increased enormously in recent years. Given the complexity of newer imaging modalities, particularly CT and MR, it is now commonplace for the interpretation of clinical images to take longer than the process of acquiring them [4]. Workload can be a factor in increasing the likelihood of errors in radiology reporting [2]. A variety of studies have shown that most abnormal findings on plain radiographs are found during the first few seconds of searching the image, with the number of true-positive findings decreasing abruptly after a short time. However, a radiologist interpreting a radiograph in a few seconds is gambling that a large proportion of the radiograph shows normal findings [12]. In at least one instance, a radiologist in the United States has been sued for punitive damages in a medical malpractice lawsuit arising from a case of breast cancer missed on a mammogram, because "the defendant radiologist read too many x-ray examinations on the day in question, demonstrating a wanton disregard of patient well-being by sacrificing quality patient care for volume in order to maximise revenue" [12]. The case was settled out of court without a formal finding. Furthermore, a recent study of radiologists' visual accommodation and performance showed that the ability to focus and to detect fractures diminished at the end of the work-day [13]. Longer work-days can only exacerbate this decline in performance, and therefore in safety. This is in nobody's best interests.

NEGLIGENCE

Perfection, n. An imaginary state or quality distinguished from the actual by an element known as excellence, an attribute of the critic. (Bierce, Ambrose. The Devil's Dictionary.)

The legal basis for negligence involves a breach of the standard of care, which is usually defined as the use of the same degree of knowledge, skill and ability as an ordinary careful physician would exercise under similar circumstances. Many legal judgements in the US and other jurisdictions have clearly established that doctors cannot be required to be perfect, and cannot be expected to guarantee a good result to patients. Negligence occurs not when there is merely an error, but when the degree of error exceeds an acceptable norm [11]. The courts occasionally treat false negative errors as if they were errors of negligence. It is frequently alleged after retrospective review that lesions should have been noted prospectively. However, application of theories of perceptual thresholds shows that it makes sense that more lesions will be perceived retrospectively [14]. An appellate court in Wisconsin ruled in 1998 that "radiologists simply cannot detect all abnormalities on all x-rays… Errors in perception by radiologists viewing x-rays occur in the absence of negligence". Hindsight bias is the tendency for people with knowledge of the actual outcome of an event to believe falsely that they would have predicted the outcome. Hindsight bias is an extremely compelling influence; people try to make sense of what they know has happened rather than analysing the available data independently. The exact mechanism by which hindsight bias influences judgement is called "creeping determinism": a process in which outcome information is immediately and automatically integrated into a person's knowledge about the events preceding the outcome.
Hindsight bias is not supposed to influence the determination of medical negligence, but it ensures that some reasonably-acting defendants will be unfairly subjected to adverse liability judgements when after-injury evaluation has taken place [15].

"Another source of fallacy is the vicious circle of illusions which consists on the one hand of believing what we see, and on the other of seeing what we believe." Sir Clifford Allbutt (1836-1925).

It has been suggested that, in malpractice cases relating to radiology, judges should instruct juries that "there is an absolutely unavoidable 'human factor' at work in the review of films; some abnormalities may be missed, even the obvious ones; the mere fact that a radiologist misses an abnormality on a radiograph does not mean that he or she has committed malpractice; and not all radiographic misses are excusable. Therefore, the focus of attention should be on issues such as proof of competence, habits of practice, and use of proper techniques" [16].

Err, v.i. To believe or act in a way contrary to my beliefs and actions. (Bierce, Ambrose. The Devil's Dictionary.)

GENERIC FACTORS CONTRIBUTING TO UNDERPERFORMANCE/DISCREPANCIES/ERRORS

1. Radiologist-specific causes of error

Renfrew reviewed 182 cases presented at a problem case conference between August 1986 and October 1990. The causes of error identified were subsequently classified as follows:
a. Complacency – the finding was appreciated but attributed to the wrong cause
b. Faulty reasoning – the finding was appreciated and interpreted as abnormal, but attributed to the wrong cause
c. Lack of knowledge on the part of the viewer
d. Under-reading – the finding was identifiable, but was missed
e. Poor communication – the lesion was identified and interpreted correctly, but the message failed to reach the relevant clinician
f. Miscellaneous – the lesion was not present on the images, even in retrospect; this may be due to limitations of the examination or an inadequate examination
g. Complications – most frequently during invasive procedures [14].

Another individual cause of error is "satisfaction of search", the phenomenon whereby detection of one abnormality on a radiographic study results in premature termination of the search, allowing for the possibility of missing other, related or unrelated abnormalities [2,14]. Perceptual errors continue to constitute the bulk of errors made by radiologists, and false negative errors are the most frequently committed perceptual mistakes [14].

2. System issues contributing to errors

System contributors to discrepancies and errors include the following:
a. Staff shortages
b. Excess workload – studies have demonstrated degradation of lung cancer detection with decreased viewing time, and increased error rates in abdominal CT reporting when the radiologist reports more than 20 studies per day [2]. A recent national survey of consultant radiologist workload in Ireland found that, in 2009, the average Irish radiologist was performing 128% of the workload considered appropriate as a benchmark measured in Australia [17,18]. Increasing numbers and complexity of imaging studies require a matching increase in radiology manpower. ("A motto: Do it tomorrow; you've made enough mistakes today." Powell, Dawn. Entry for 23 August 1956, The Diaries of Dawn Powell 1931-65, ed. T. Page (1995).)
c. Inexperience of staff
d. Inadequate equipment [2]
e. Inadequacy of clinical information available to the reporting radiologist – the clinical diagnosis has been shown to change in 50% of cases following communication between clinician and radiologist, with a change of treatment in 60% of the cases discussed [19]. This is one of the many strong arguments against the use of remote teleradiology reporting for radiologic studies. Knowledge of pertinent clinical history has been shown to increase the accuracy of CXR interpretation from 16% to 72% for trainees, and from 38% to 84% for consultant-grade radiologists [6].
f. Inappropriate expectations of the capability of a particular radiologic technique, which might not be suitable for the question being asked of it
g. Unavailability of previous studies or reports for comparison [4]
h. Over-reliance on locum radiologists within a department.

GENERIC FACTORS MITIGATING UNDERPERFORMANCE/DISCREPANCIES/ERROR

While the factors causing and protecting against underperformance and discrepancies/errors are similar whatever the location or working circumstances, we consider these potentially mitigating factors from the more specific standpoint of current structures within the Republic of Ireland. The factors outlined below are at different stages of development or underdevelopment within the Irish healthcare system and individual radiology departments. Some of them are therefore, of necessity, aspirational, and their implementation will require significant planning and resources.
a. Availability of trained/accredited Radiologists. The evolving role of competence assurance, including continuous professional development, under the auspices of the Irish Medical Council will play a significant role in the validation of skill maintenance. The requirement that all doctors on the Specialist Register of the Irish Medical Council participate in a Professional Competence Scheme (PCS), which became a legal requirement from 1 May 2011, should eliminate the possibility of radiological services being provided by inappropriately qualified or certified doctors.
b. Availability of trained and certified Radiographers, Physicists and other staff members within radiology departments. There is no legal provision at present for radiography services to be provided by anybody other than appropriately qualified and registered professionals. However, some departments do experience difficulty in maintaining adequate staff numbers, as a result of many factors, including recruitment moratoria and a lack of suitably trained individuals.
c. Implementation of an integrated quality assurance/improvement programme. There are many components to an integrated quality assurance programme, involving all staff members in a radiology department. The Faculty of Radiologists launched a comprehensive programme for quality assurance in radiology practice in September 2010 [20]; full implementation of this programme is underway, with plans for all components to be in place by the end of 2012.
d. Audit – self-directed, randomised or peer audit. As part of the legally required Professional Competence Scheme inaugurated in May 2011, all radiologists on the Specialist Register must participate in at least one audit per annum.
e. Imaging Protocols. Adoption of standard imaging protocols may reduce the likelihood of error or discrepancy in some areas of radiology practice, especially in modalities such as CT and MR.
f. Communication Protocols.
Many errors in radiology may be attributed to poor communication at some stage in the imaging/reporting process. Structure and process audits may identify such deficiencies. As part of the Faculty QA programme [20], recommendations are made for the adoption by each department of a protocol for communication of urgent or unexpected radiological findings.
g. Equipment Maintenance. A regular programme of equipment maintenance within a radiology department is an important element of quality assurance. A rolling capital programme for equipment replacement is also desirable.
h. Discrepancy meetings. These are advocated as a learning process, not as a method of competence assessment [21]. They are also provided for and defined in the quality assurance programme [20].
i. Double reading. There is ample evidence that double reading improves accuracy. The only area where 100% double reading is the norm in the Republic of Ireland is the Breast Screening Programme. It has also been used in the United Kingdom for breast screening and for the outsourced Independent Sector MRI contract, where 10 percent of studies were audited/double read. Double reading is one of the best ways to safeguard the quality of service, and the introduction of routine double reading on an agreed percentage (e.g. 2-5%) of work would have a significant impact on the maintenance of quality. There is, however, a significant manpower issue arising from its adoption.
j. Multidisciplinary Conferences. Multidisciplinary conferences have become common (indeed, standard), particularly in the context of cancer care. One of the key elements of multidisciplinary conferences is the double reading of images within the context of the appropriate clinical scenario. This is now seen as an essential component of cancer care.

HOW DO WE IDENTIFY AND DEAL WITH UNDERPERFORMANCE?

"No one is completely worthless – they can always serve as a bad example." Anon, And I Quote, 'Example', ed. Ashton Applewhite and others (1992).

Again, while these proposed mechanisms are generally applicable, our comments make specific reference to their application in the Republic of Ireland.

1. Means of assessing error

Human error can be viewed in either a person-centered or a system-centered way, or both. A person-centered approach focuses on the individual who commits the error, and adopts counter-measures aimed at that individual, including disciplinary measures: 'naming, shaming and blaming' [2]. The NHS has concluded that the person-centered approach, though attractive from a managerial and legal perspective, is 'ill-suited to the health care domain' [2,22]. The system-based approach accepts that humans are fallible and errors inevitable, and seeks to address the contributing system causes of these errors. What matters less is who made the error, and more how and why defences failed, and what factors helped create the conditions in which the error occurred [2]. The concept of Root Cause Analysis has been used as a method to learn from mistakes and reduce hazards in the future. This process is based on the principle of answering three questions: What happened? Why did it happen? What can be done to prevent it happening again? [23] As stated in the NHS Chief Medical Officer's report on this issue: 'It is of course right, in health care as in any other field, that individuals must sometimes be held to account for their actions – in particular if there is evidence of gross negligence or recklessness, or of criminal behaviour.
Yet in the great majority of cases the causes of serious failures stretch far beyond the actions of the individuals immediately involved' [22].

2. Allegation of incompetence

One of the initial actions should be due consideration of the nature and source of the allegation, and of the means by which the allegation is made. The allegation may come from a patient, a relative of a patient, a clinician, management personnel, or a radiology colleague. Complaints from a referring clinician are particularly significant. Possible approaches would include all or some elements of the following sequence of escalation:

3. Is there a problem?

(a) The views of the Clinical Director, Institutional Risk Management Director, Medical Director and Hospital Chief Executive Officer (CEO) may be sought.
(b) Evidence of compliance with a departmental quality assurance programme and the mandatory Professional Competence Scheme should be sought where applicable.
(c) Internal audit. The local Clinical Director should undertake or arrange for a review of a random sample of cases. The radiologist involved should be informed that an audit is being undertaken.
(d) Should it be considered that there is a problem requiring further investigation or action, the advice of an ad-hoc group comprising representatives of the Faculty of Radiologists, RCSI, and relevant parties from among the Health Service Executive (HSE), the Department of Health & Children (DoH&C) and the Health Information & Quality Authority (HIQA) should be sought with respect to escalating the review.

4. External Review

If there is persistent concern after an internal audit, an external review may be performed. This review should be initiated through an established mechanism (e.g. the Forum of Irish Postgraduate Medical Training Bodies). If the internal audit has uncovered significant system issues contributing to the perceived problem, the review should not only concern the involved radiologist, but should probably also involve other departmental radiologists, with their consent. This would provide an internal control for varying departmental factors and also conform to a systems-based approach. Again, a random sample of cases should be used. There should be at least three radiologists conducting the audit (Jolly 2001) [24]. The radiologists chosen should reflect whether the radiologist under review is a general radiologist or a subspecialist radiologist, i.e. the same reporting conditions should apply.

5. Medical Council

In the United Kingdom, if there is persistent concern after an external review, an evaluation and declaration of competency is made by the National Clinical Assessment Service (NCAS). There is no specific similar body in Ireland, and this function therefore presumably resides with the Medical Council. Any determination made by the Medical Council may have grave consequences for an individual under investigation, and due care must be taken to ensure that the processes used are fair and judicious.

6. "Look Back"

Once a problem is confirmed after an external review, a 'look back' may be instigated, if necessary, to assess the impact of the problem; this should be targeted (e.g. mammograms only), graduated (e.g. initially over the most recent 3-6-month period) and risk-based (e.g. plain films not reviewed by another doctor). This should probably be performed in the public eye, as a problem has now been confirmed (as opposed to a suspicion), and there is a duty to inform the public where a problem exists.
All patients whose studies are being reviewed should be informed prior to the commencement of the process. In general terms, such "looks back" are very labour- and resource-intensive, and should be avoided where possible, given that they inevitably divert resources away from dealing with active and current patients.

7. Risk Assessment Template

This three-part process, based on the Irish Health Service Executive and the UK Health and Safety Executive Risk Assessment Tool [25], uses a scoring methodology to assess the impact of a particular discrepancy episode and estimate the likelihood of a wider problem. Although unvalidated, it is one possible means of gauging the scale and nature of any needed intervention. The initial assessment should be carried out by the Clinical Director. The process is outlined in Table 1.

Table 1: Risk Assessment Template

STEP 1: Evaluate the level of discrepancy/error.
Score 1 – Negligible: no ill effects
Score 2 – Minor: minimal ill effects
Score 3 – Moderate: error resulting in short-term ill effects
Score 4 – Major: error resulting in long-term ill effects
Score 5 – Extreme: error resulting in severe long-term or fatal ill effects

STEP 2: Evaluate proof of competence, habits of practice and use of proper techniques.
2(a) System-related factors (score 5 each): clinical team working environment; audit; case conferences; appropriate workload; PACS/available clinical information; discrepancy meetings; modern equipment; trained radiographic staff.
2(b) Professional factors (score 8 each): experienced; working in a radiology team; isolated incident; CPD; no health/stress issues.

STEP 3: Apply the risk matrix (multiplication of the level of discrepancy by the system/professional factors score).

System/professional    Level of discrepancy
factors score          Negligible (1)  Minor (2)  Moderate (3)  Major (4)  Extreme (5)
5                      5               10         15            20         25
4                      4               8          12            16         20
3                      3               6          9             12         15
2                      2               4          6              8         10
1                      1               2          3              4          5

APPLICATION OF RISK MATRIX OUTCOME

BAND 1 (matrix score 1-5): Local resolution is desirable. The relevant error should be fed back by the Lead Radiologist to the imaging professional concerned and subsequently discussed and recorded at the departmental discrepancy meeting. Relevant clinicians should be informed. Any remedial actions required can be directed from the discrepancy meeting platform.

BAND 2 (matrix score 6-12): Local resolution is possible. The relevant error should be fed back to the imaging professional concerned and discussed at the departmental discrepancy meeting. Relevant clinicians should be informed. The case can be reviewed by the Lead Radiologist with the input of Institutional Risk Management. Consideration can be given to an internal audit, as in 3(c) above.

BAND 3 (matrix score ≥15): The error should be fed back to the imaging professional concerned and discussed at the departmental discrepancy meeting. Institutional Risk Management and relevant clinicians should be informed. Consideration should be given to an external review, as in 4 above.
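To make the banding arithmetic of the Risk Assessment Template concrete, the short sketch below multiplies the level of discrepancy by the system/professional factors score (both assumed to already be on the 1-5 scale used in the matrix) and maps the product onto Bands 1-3. It is only an illustrative reading of Table 1; the function name, labels and input checks are our own and are not part of the template.

```python
# Illustrative sketch of the Table 1 banding: Band 1 = 1-5, Band 2 = 6-12, Band 3 >= 15.
# The function name, labels and validation are assumptions, not part of the template.

def risk_band(discrepancy_level: int, factors_score: int) -> str:
    """Multiply the discrepancy level (1-5) by the system/professional factors
    score (1-5) and return the corresponding outcome band from Table 1."""
    if not (1 <= discrepancy_level <= 5 and 1 <= factors_score <= 5):
        raise ValueError("both scores must lie between 1 and 5")
    matrix_score = discrepancy_level * factors_score
    if matrix_score <= 5:
        return f"Band 1 (score {matrix_score}): local resolution desirable"
    if matrix_score <= 12:
        return f"Band 2 (score {matrix_score}): local resolution possible"
    return f"Band 3 (score {matrix_score}): consider external review"

# Example: a moderate discrepancy (3) with a factors score of 4 gives 12, i.e. Band 2.
print(risk_band(3, 4))
```

Because both inputs are whole numbers between 1 and 5, products of 13 or 14 cannot occur, so the Band 2 (6-12) and Band 3 (≥15) boundaries in Table 1 cover every possible matrix score.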
CONCLUSION

Errors are inevitable, in medicine as in life, and the concept of necessary fallibility must be accepted. Equally, a threshold of competency is required of all professionals involved in the delivery of medical services. In this paper, we explore the concepts of error and discrepancy in radiology, discuss some of the factors which may contribute to errors and discrepancies, and outline a graduated approach to the management of perceived or identified errors or discrepancies in radiological practice which, with appropriate adaptation, may be applicable to similar scenarios in other specialties.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.