44 Cards in this Set
- Front
- Back
Evidence-Based Practice (EBP)
|
Current,
high-quality research evidence is integrated with practitioner expertise and client preferences and values into the process of making clinical decisions |
|
EBP includes
|
Recognize and integrate the needs of those served, along with best current research evidence and their clinical expertise, in making clinical decisions;
• acquire and maintain the knowledge and skills necessary to provide high-quality professional services;
• evaluate prevention, screening, and diagnostic procedures, protocols, and measures to identify maximally informative and cost-effective diagnostic and screening tools, using recognized appraisal criteria described in the evidence-based practice literature;
• evaluate the efficacy, effectiveness, and efficiency of clinical protocols for prevention, treatment, and enhancement using criteria recognized in the evidence-based practice literature;
• evaluate the quality of evidence appearing in any source or format, including journal articles, textbooks, continuing education offerings, newsletters, advertising, and Web-based products, prior to incorporating such evidence into clinical decision making; and
• monitor and incorporate new and high-quality research evidence having implications for clinical practice |
|
Steps to EBP
|
Form clinical question
Use internal evidence
Find best current external evidence
Critical appraisal
Decide whether evidence is strong enough
Integrate external evidence w/ "intangibles"
Update |
|
Internal Evidence
|
Intangibles, such as your clinical experience |
|
External evidence
|
Explicit criteria to judge evidence quality:
Relevance (to the clinical question)
Level of evidence based on design and quality
Direction, strength, and consistency of the observed outcomes |
|
Levels of External Evidence
Ia |
Systematic review or meta-analysis of well-designed randomized controlled trials |
|
Levels of External Evidence
Ib |
Well-conducted single randomized controlled trial w/narrow confidence interval
|
|
Levels of External Evidence
IIa |
Systematic review of nonrandomized quasi-experimental trials or single-subject studies that documents consistent study outcomes
|
|
Levels of External Evidence
IIb |
High-quality quasi-experimental trial, a lower-quality RCT, or a single-subject experiment with outcomes consistent across replications
|
|
Levels of External Evidence
Bottom 3 |
III - Observational studies w/ controls (retrospective, interrupted time-series, case-control, cohort w/ controls)
IV - Observational studies w/o controls
V - Expert opinion |
|
Experimental design
|
patients are enrolled prospectively, randomized to group/condition, and some variable is controlled
|
|
Quasi-experimental design
|
prospective but not randomized or controlled
|
|
Non-experimental design
|
patients are identified retrospectively, no manipulation of variables
|
|
Randomized controlled trial (RCT)
|
Designed BEFORE patients enrolled
Patients are assigned at RANDOM to one group to be compared |
|
RCT - pros
|
If randomization is accomplished correctly, best odds that groups do not differ systematically on any variable other than the one of interest
|
|
RCT - cons
|
Costly, time-consuming, in some cases may raise ethical issues, potential for volunteer bias
|
|
Quasi-experimental: prospective w/o randomization
|
E.g., Cohort study: patients w/ and w/o the variable are id'd and followed forward in time to compare their outcomes
PROS - ability to study low-incidence and/or late-emerging problems efficiently
CONS - exposure may be linked to a hidden confounder; blinding is difficult |
|
Non-experimental studies 1 of 2
|
Cross-sectional/correlational: group is observed at a single point; exposure and outcome are determined simultaneously
PROS - cheap, ethically safe
CONS - causality cannot be established, susceptible to recall bias, confounders may not be equally distributed |
|
Non-experimental studies 2 of 2
|
Case-control: patients with the outcome (cases) are identified retrospectively and compared with patients without it (controls) to determine prior exposure
|
|
Weakest non-experimental designs
|
Case series: no control group
Case report: N = 1 |
|
Purposes of Assessment
|
Identify skills a person has, and does not have, in a particular area of communication
Guide intervention design
Monitor communicative growth and performance over time
Qualify a person for special services |
|
Screening - why?
|
To answer a yes/no question
To identify children at risk |
|
Language Assessment - what to assess
|
Receptive: comprehension of vocabulary, morphosyntax
Production: expressive vocabulary and morphosyntax
Domains of language |
|
Collateral areas to assess
|
Hearing
Speech-motor assessment
Non-verbal cognition
Social functioning |
|
Validity
|
Extent to which a test MEASURES WHAT IT purports to measure
|
|
Construct validity
|
Measures the theoretical construct it is designed to measure
|
|
Face validity
|
Common-sense match between intended purpose and actual content
|
|
Content validity
|
Items should be representative of the content domain sampled
Evaluated by experts |
|
Criterion-related validity
|
Concurrent: test agrees with other valid instruments in categorizing children as normal or not
Predictive: how well the test predicts a child's later performance on another valid measure |
|
Reliability
|
Consistency of measurement with regard to a particular skill or behavior
Good reliability: correlation r ≥ .90 |
|
Test-retest reliability
|
Test given at 2 different times, same/stable scoring
|
|
Inter-Rater reliability
|
Given by 2 different examiners, stable scoring
|
|
Internal consistency reliability
|
Subtests of the test rank subject similarly
Parts of the test are measuring something similar to what the whole test is measuring |
|
Split-half reliability
|
Consistency among halves of the test
|
|
Equivalent forms reliability
|
If a test has 2 forms, there is consistency among the forms
|
|
Diagnostic accuracy
|
How accurately the test classifies children
Goal: good accuracy is 90% or better |
|
Sensitivity
|
Proportion of children WITH THE DISORDER accurately id'd by the test
|
|
Specificity
|
Proportion of children WITHOUT THE DISORDER accurately id'd by the test
|
|
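Accuracy, sensitivity, and specificity all fall out of one 2x2 classification table. A minimal Python sketch with hypothetical counts (the 90% goal is from the cards; these numbers are invented):

```python
# Hypothetical screening results for 100 children:
# 20 truly have the disorder, 80 do not (made-up counts).
true_pos  = 18   # flagged by the test AND truly disordered
false_neg = 2    # truly disordered but missed by the test
true_neg  = 74   # typically developing and correctly passed
false_pos = 6    # typically developing but flagged anyway

# Sensitivity: proportion WITH the disorder that the test identifies
sensitivity = true_pos / (true_pos + false_neg)           # 18/20 = 0.90
# Specificity: proportion WITHOUT the disorder that the test identifies
specificity = true_neg / (true_neg + false_pos)           # 74/80 = 0.925
# Overall accuracy: all correct classifications over all children
accuracy = (true_pos + true_neg) / (true_pos + false_neg
                                    + true_neg + false_pos)  # 92/100 = 0.92

print(f"sensitivity = {sensitivity:.2f}")
print(f"specificity = {specificity:.3f}")
print(f"accuracy    = {accuracy:.2f}")
```

Note that overall accuracy can look acceptable even when sensitivity is poor, which is why the cards list the two proportions separately.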
Test bias - "Technical" bias
|
Equating test performance with ability (may lead to misclassification and underestimation)
Predictive validity is based on correlations between group means, not individual scores |
|
Test bias: def'n
|
Invalidity or systematic error in how a test measures for members of a PARTICULAR GROUP
e.g., slow stopwatch used for one group |
|
Adverse impact of biased and non-biased tests
|
Secondary effect of biased or non-biased tests:
Group differences in test performance that result in disproportionality in selection |
|
Criterion referenced assessment
|
Used to determine an individual's level of achievement or skill in a particular area of communication
|
|
Performance based assessment
|
Describes an individual's skills or behaviors within authentic contexts of use (like home, workplace)
Rationale: Communicative abilities are highly influenced by context; thus, traditional models are of limited use in PLANNING treatments |
|
Dynamic assessment
|
Analyzes how much and what types of support or assistance are needed to bring the individual's communicative performance to a higher level
|