Let \(\theta\) denote the parameter under study and \(f(\theta)\) the prior distribution of \(\theta\). Let \(x=(x_1,\ldots,x_n)\in \mathcal{X}\) denote the data to be observed in the new study, where \(n\) is the sample size. The pre-posterior marginal distribution of the data is:
\[
f(x)=\int_\Theta f(x|\theta)\,f(\theta)\,d\theta \tag{1}
\] and the posterior distribution of \(\theta\) given \(x\) is: \[
f(\theta|x)=\frac{f(x|\theta)\,f(\theta)}{\int_\Theta f(x|\theta)\,f(\theta)\,d\theta} \tag{2}
\] In the Fully Bayesian approach, \(f(\theta)\) is an informative prior distribution in both (1) and (2). In the Mixed Bayesian-Likelihood approach, \(f(\theta)\) is informative in (1) but non-informative, e.g. uniform, in (2).
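As a concrete illustration of (1) and (2), the sketch below uses a conjugate Beta-Binomial model; the choice of model, the prior parameters, the sample size \(n\), and the observed value are all hypothetical and chosen only for illustration. It draws from the pre-posterior distribution (1) by simulation, checks the draws against the closed-form Beta-Binomial probability mass function, and contrasts the Fully Bayesian posterior (informative prior in (2)) with the Mixed Bayesian-Likelihood posterior (uniform prior in (2)).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setting: theta is a response probability, the informative
# prior is Beta(a0, b0), and the new study observes x successes out of n.
a0, b0 = 8.0, 4.0   # assumed informative prior parameters (illustration only)
n = 30              # assumed sample size of the new study

# Equation (1): simulate from the pre-posterior (prior predictive)
# distribution f(x) = ∫ f(x | theta) f(theta) dtheta by composition:
# draw theta ~ f(theta), then x | theta ~ Binomial(n, theta).
theta_draws = rng.beta(a0, b0, size=10_000)
x_draws = rng.binomial(n, theta_draws)

# For the Beta-Binomial model the integral in (1) has a closed form,
# so the simulated frequencies can be checked against scipy's betabinom.
x_grid = np.arange(n + 1)
exact_pmf = stats.betabinom.pmf(x_grid, n, a0, b0)

# Equation (2): posterior of theta given an observed x.
# Fully Bayesian (FB): the informative Beta(a0, b0) prior is used again,
# giving Beta(a0 + x, b0 + n - x).
# Mixed Bayesian-Likelihood (MBL): a uniform Beta(1, 1) prior is used at
# the analysis stage, giving Beta(1 + x, 1 + n - x).
x_obs = 18  # an illustrative observed value
fb_posterior = stats.beta(a0 + x_obs, b0 + n - x_obs)
mbl_posterior = stats.beta(1 + x_obs, 1 + n - x_obs)

print(f"P(X = {x_obs}) exact: {exact_pmf[x_obs]:.4f}, "
      f"simulated: {np.mean(x_draws == x_obs):.4f}")
print(f"FB  posterior mean: {fb_posterior.mean():.3f}")
print(f"MBL posterior mean: {mbl_posterior.mean():.3f}")
```

In this sketch the informative prior affects which data sets are likely to arise at the design stage (1) under both approaches; the two approaches differ only in the analysis stage (2), where the Mixed Bayesian-Likelihood posterior is driven by the likelihood alone through the flat Beta(1, 1) prior.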