# Comparative Judgement Practical
Create a new Comparative Judgement task on [www.nomoremarking.com](https://www.nomoremarking.com) and upload eight PDFs containing the numbers 1 to 8. Judge the numbers 80 times, then download the decisions and persons files.
## Fitting the Bradley-Terry Model
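Before fitting, it helps to recall what the model estimates. Under the Bradley-Terry model each script $i$ has a latent quality $\theta_i$, and the probability that $i$ is chosen over $j$ in a comparison depends only on the difference in qualities:

$$
P(i \text{ beats } j) = \frac{\exp(\theta_i - \theta_j)}{1 + \exp(\theta_i - \theta_j)}
$$

The `btm()` fit below also applies an epsilon adjustment and has options for handling ties, so it is not quite this bare model, but this is the core idea.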
```{r}
#| label: setup
#| include: true
#| warning: false
#| echo: true
#| output: false
# start from a clean workspace
rm(list = ls())
library(tidyverse)  # data wrangling with dplyr, readr etc.
library(ggplot2)    # plotting (also attached by tidyverse)
library(sirt)       # btm() for fitting the Bradley-Terry model
require(nmmBtm)
# read the decisions and persons files downloaded from No More Marking
decisions <- read_csv('data/decisions-1.csv')
persons <- read_csv('data/persons-1.csv')
# check the structure of both data sets
glimpse(decisions)
glimpse(persons)
```
```{r}
#| label: btm
#| include: true
#| warning: false
#| echo: true
#| output: false
# keep the person ids and their first names
persons <- persons %>%
  select(Code, `First Name`) %>%
  rename(id = Code, name = `First Name`)
# add the names of the chosen and not-chosen scripts to each decision
decisions <- decisions %>%
  left_join(persons, by = c('chosen' = 'id')) %>%
  rename(chosenName = name) %>%
  left_join(persons, by = c('notChosen' = 'id')) %>%
  rename(notChosenName = name)
# convert chosenName and notChosenName to factors
decisions <- decisions %>%
  mutate(chosenName = factor(chosenName), notChosenName = factor(notChosenName))
# create a label for each judge
decisions <- decisions %>%
  mutate(judge = paste0('judge_', judgeName))
# pull out the vector of judge labels
judges <- decisions %>% pull(judge)
# format the decisions for btm(): winner, loser, result
decisions <- decisions %>%
  select(chosenName, notChosenName) %>%
  mutate(result = 1)
# btm() expects a plain data frame rather than a tibble
decisions <- as.data.frame(decisions)
# fit the Bradley-Terry model:
# fix.eta = 0 fixes the 'home advantage' (position) parameter at zero,
# ignore.ties = TRUE ignores any tied comparisons,
# eps = 0.3 is the default epsilon adjustment, which stabilises estimates
# for scripts with extreme win/loss records
mod1 <- btm(decisions, judge = judges, fix.eta = 0, ignore.ties = TRUE, eps = 0.3)
```
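For a quick overview of the fitted model before digging into its individual components, `sirt` provides a `summary()` method for `btm` objects. A minimal usage sketch (the exact layout of the output depends on the package version):

```{r}
#| label: btm-summary
# print an overall summary of the fitted Bradley-Terry model
summary(mod1)
```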
## Explore the model results
See <https://alexanderrobitzsch.github.io/sirt/reference/btm.html>
### Reliability
### Person parameters
```{r}
#| label: reliability
#| include: true
# The MLE reliability, also known as the separation reliability
print(mod1$mle.rel)
# The person parameters (theta estimates and their standard errors)
knitr::kable(mod1$effects, digits = 2)
# The judge fit: outfit and infit statistics for each judge, plus each judge's
# average agreement with the modal rating (computing the agreement requires at
# least three ratings per pair of scripts)
knitr::kable(mod1$fit_judges, digits = 2)
```
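As a rough intuition for the reliability statistic above: separation reliability can be thought of as the proportion of the variance in the estimated person parameters that is not measurement error. A minimal sketch of that idea, computed by hand from the model effects (an illustrative approximation that will not necessarily match `mod1$mle.rel` exactly):

```{r}
#| label: reliability-by-hand
# observed variance of the person parameter estimates
obs_var <- var(mod1$effects$theta)
# average error variance, from the squared standard errors
err_var <- mean(mod1$effects$se.theta^2)
# proportion of observed variance not attributable to measurement error
(obs_var - err_var) / obs_var
```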
## Visualise the person parameters (model effects) with their standard errors
```{r}
#| label: person-parameters
# add upper and lower limits (+/- 1.96 standard errors) to the person parameters
effects <- mod1$effects %>%
  mutate(
    lower = theta - 1.96 * se.theta,
    upper = theta + 1.96 * se.theta
  )
# plot the estimates with 95% error bars
p <- ggplot(effects, aes(x = individual, y = theta)) +
  geom_point() +
  geom_errorbar(aes(ymin = lower, ymax = upper), width = 0.2) +
  theme_bw() +
  labs(x = 'Person', y = 'Person parameter')
print(p)
```
## Inspect the model probabilities
## What are p1, p0 and pD?
```{r}
#| label: probs
#| include: true
#| warning: false
#| echo: true
#| output: true
# the first few rows of the estimated comparison probabilities
head(mod1$probs)
```
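As a hint for the question above (and for the last extension exercise below): under the plain Bradley-Terry model the probability that one script beats another is the logistic function of the difference in their estimated thetas. A minimal sketch, assuming `p1` is the probability that the chosen script wins, and ignoring the ties handling and epsilon adjustment used in the actual fit (so the numbers will not match `mod1$probs` exactly):

```{r}
#| label: recreate-probs
#| eval: false
# look up each individual's estimated theta by name
theta <- setNames(mod1$effects$theta, mod1$effects$individual)
# probability that script a beats script b under the basic Bradley-Terry model
bt_prob <- function(a, b) plogis(theta[a] - theta[b])
# e.g. compare the first two individuals in the effects table
bt_prob(names(theta)[1], names(theta)[2])
```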
## Extension exercises
* Try some less accurate judging. Note the impact on the model parameters.
* Can you recreate the model probabilities?