Closed
Hello,
This is a partly theoretical question; I hope it is not inappropriate to post it as an issue. These resources:
- A simple introduction to Markov Chain Monte–Carlo sampling
- A Conceptual Introduction to Hamiltonian Monte Carlo
definitely helped a lot in understanding the inner workings of some of Turing.jl's features, but I'd also like to ask here to be sure.
Suppose one has to calibrate a model (e.g. a DifferentialEquations.jl model) using Turing.
1. Does the adoption of the likelihood `data ~ MvNormal(predicted, σ)`, where `σ` is a `Float64`, imply that the points in `data` are assumed to be independent?
2. If one knows that the data are somehow correlated and strictly positive, would substituting `MvNormal(predicted, σ)` with something like a `TruncatedMvNormal(predicted, Σ)` (where `Σ` is now a matrix) be the best option? If so, what prior would you set on `Σ` (or on its entries)?
3. Do the samplers (NUTS in particular) or ADVI anywhere assume the independence of the model's parameters being sampled?
4. Specifically regarding ADVI, did I correctly understand that this example shows how to relax the supposed independence assumption of question 3?
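To make the first question concrete, here is a small self-contained check I ran with Distributions.jl directly (my own sketch, not taken from Turing's docs): with a scalar `σ` the implied covariance is `σ² * I`, so the joint log-density factorises into a sum of independent per-point Normal terms.

```julia
# Sketch: a scalar σ in MvNormal means covariance σ² * I, i.e. zero
# off-diagonal terms, so the points are treated as independent.
using Distributions, LinearAlgebra

predicted = [1.0, 2.0, 3.0]   # model output (placeholder values)
data      = [1.1, 1.9, 3.2]   # observations (placeholder values)
σ = 0.5

# Joint log-density under the multivariate likelihood ...
joint = logpdf(MvNormal(predicted, σ^2 * I), data)
# ... equals the sum of independent univariate Normal log-densities.
independent = sum(logpdf.(Normal.(predicted, σ), data))

joint ≈ independent   # holds exactly up to floating point
```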
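And a hypothetical sketch of the direction the second question asks about. `TruncatedMvNormal` is a made-up name (I don't think Distributions.jl ships a truncated multivariate normal), so this sketch only handles the correlation part, not positivity, using an LKJ prior on the Cholesky factor of the correlation matrix — one common construction, not an established recommendation:

```julia
# Hypothetical sketch for question 2: correlated observation noise via an
# LKJ prior on the correlation structure. Names (`correlated_noise`, the
# half-Cauchy scales, the LKJ shape 2.0) are illustrative choices.
using Turing, LinearAlgebra

@model function correlated_noise(data, predicted)
    n = length(data)
    # Per-point noise scales; half-Cauchy is one weakly informative choice.
    σ ~ filldist(truncated(Cauchy(0, 1); lower=0), n)
    # LKJ prior on the Cholesky factor of the correlation matrix.
    L ~ LKJCholesky(n, 2.0)
    # Scale the correlation factor to get a covariance Cholesky factor.
    F = LowerTriangular(Diagonal(σ) * Matrix(L.L))
    # Full-covariance likelihood: off-diagonal terms are now nonzero.
    data ~ MvNormal(predicted, Symmetric(F * F'))
end
```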
Thanks in advance