Using a new interrater reliability method to test the modified Oulu Patient Classification instrument in home health care.

Jill Flo, Bjørg Landmark, Ove Edward Hatlevik, Lisbeth Fagerström
Published in: Nursing open (2018)
For parallel classifications, consensus varied between 64.78% and 77.61%. Interrater reliability (Cohen's kappa) varied between 0.49 and 0.69, and internal consistency (Cronbach's alpha) between 0.81 and 0.94. Analysis of the raw scores showed that 27.2% of classifications had the same points, 39.1% differed by one point, 17.9% by two points and 16.5% by three or more points.
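The kappa statistic reported above measures agreement between two raters beyond what chance alone would produce. A minimal sketch of the standard Cohen's kappa calculation, using hypothetical classification data (not the study's own ratings):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance, derived
    from each rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items with identical ratings
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal proportions per category
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical parallel classifications on a 1-4 scale
a = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4]
b = [1, 2, 3, 3, 4, 1, 2, 2, 3, 4]
print(round(cohens_kappa(a, b), 2))  # → 0.73
```

Values between 0.41 and 0.60 are conventionally read as moderate agreement and 0.61 to 0.80 as substantial, which is where the study's range of 0.49 to 0.69 falls.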