Other¶

flip([p])¶

Draws a sample from Bernoulli({p: p}). p defaults to 0.5 when omitted.
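The sampling semantics can be sketched in plain JavaScript (this is a stand-in using Math.random, not WebPPL's implementation):

```javascript
// Sketch of flip's semantics: return true with probability p,
// defaulting p to 0.5 when the argument is omitted.
var flip = function(p) {
  p = (p === undefined) ? 0.5 : p;
  return Math.random() < p;
};

flip();    // true or false, each with probability 0.5
flip(0.9); // true with probability 0.9
```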

uniformDraw(arr)¶

Draws a sample from the uniform distribution over elements of array arr.
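The semantics amount to picking an index uniformly at random (a plain-JavaScript sketch, not WebPPL's implementation):

```javascript
// Sketch of uniformDraw's semantics: choose a uniformly random index
// and return the element stored there.
var uniformDraw = function(arr) {
  return arr[Math.floor(Math.random() * arr.length)];
};

uniformDraw([1, 2, 3]); // each element returned with probability 1/3
```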

display(val)¶

Prints a representation of the value val to the console.

expectation(dist[, fn])¶

Computes the expectation of a function fn under the distribution given by dist. The distribution dist must have finite support. fn defaults to the identity function when omitted.

expectation(Categorical({ps: [.2, .8], vs: [0, 1]})); // => 0.8
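Because the support is finite, the expectation is just a probability-weighted sum. The sketch below assumes the distribution is given as parallel arrays ps and vs (real WebPPL distribution objects expose this differently):

```javascript
// Sketch of expectation over a finite-support distribution,
// represented here as {ps: [...], vs: [...]}. fn defaults to the
// identity function when omitted.
var expectation = function(dist, fn) {
  fn = fn || function(x) { return x; };
  var total = 0;
  for (var i = 0; i < dist.vs.length; i++) {
    total += dist.ps[i] * fn(dist.vs[i]);
  }
  return total;
};

expectation({ps: [0.2, 0.8], vs: [0, 1]});                              // => 0.8
expectation({ps: [0.2, 0.8], vs: [0, 1]}, function(x) { return 2 * x; }); // => 1.6
```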

marginalize(dist, project)¶

Marginalizes out certain variables in a distribution. project can be either a function or a string.

Using it as a function:

var dist = Infer({model: function() {
  var a = flip(0.9);
  var b = flip();
  var c = flip();
  return {a: a, b: b, c: c};
}});

marginalize(dist, function(x) { return x.a; })
// => Marginal with p(true) = 0.9, p(false) = 0.1

Using it as a string:

marginalize(dist, 'a')
// => Marginal with p(true) = 0.9, p(false) = 0.1
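Marginalization sums the probability mass of each joint outcome into a bucket keyed by the projected value. The sketch below assumes the joint is given as an array of {value, prob} pairs (an assumption for illustration; WebPPL distributions are richer objects):

```javascript
// Sketch of marginalize's semantics: group joint probability mass by
// the projected value. A string project selects that property.
var marginalize = function(support, project) {
  var fn = (typeof project === 'string') ?
      function(x) { return x[project]; } : project;
  var marginal = {};
  support.forEach(function(entry) {
    var key = JSON.stringify(fn(entry.value));
    marginal[key] = (marginal[key] || 0) + entry.prob;
  });
  return marginal;
};

// Joint over {a, b, c} from the example: a ~ flip(0.9), b and c ~ flip().
var support = [];
[true, false].forEach(function(a) {
  [true, false].forEach(function(b) {
    [true, false].forEach(function(c) {
      support.push({value: {a: a, b: b, c: c},
                    prob: (a ? 0.9 : 0.1) * 0.5 * 0.5});
    });
  });
});

marginalize(support, 'a'); // p(true) = 0.9, p(false) = 0.1
```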

forward(model)¶

Evaluates a function of zero arguments, model, ignoring any conditioning.

Also see: Forward Sampling

forwardGuide(model)¶

Evaluates a function of zero arguments, model, ignoring any conditioning and sampling from the guide at each random choice.

Also see: Forward Sampling

mapObject(fn, obj)¶

Returns the object obtained by mapping the function fn over the values of the object obj. Each application of fn has a property name as its first argument and the corresponding value as its second argument.

var pair = function(x, y) { return [x, y]; };
mapObject(pair, {a: 1, b: 2}); // => {a: ['a', 1], b: ['b', 2]}
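The key-first argument order can be sketched in plain JavaScript (a sketch of the semantics, not WebPPL's implementation):

```javascript
// Sketch of mapObject's semantics: build a new object whose value for
// each key is fn(key, value). Note the property name comes first.
var mapObject = function(fn, obj) {
  var result = {};
  Object.keys(obj).forEach(function(key) {
    result[key] = fn(key, obj[key]);
  });
  return result;
};

mapObject(function(k, v) { return v + 1; }, {a: 1, b: 2}); // => {a: 2, b: 3}
```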

extend(obj1, obj2, ...)¶

Creates a new object and assigns own enumerable string-keyed properties of source objects 1, 2, … to it. Source objects are applied from left to right; subsequent sources overwrite property assignments of previous sources.

var x = { a: 1, b: 2 };
var y = { b: 3, c: 4 };
extend(x, y); // => { a: 1, b: 3, c: 4 }
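The left-to-right precedence matches Object.assign onto a fresh target (a plain-JavaScript sketch of the semantics, not WebPPL's implementation):

```javascript
// Sketch of extend's semantics: copy sources into a fresh object from
// left to right, so later sources win on key collisions; the source
// objects themselves are not mutated.
var extend = function() {
  var args = [{}].concat(Array.prototype.slice.call(arguments));
  return Object.assign.apply(null, args);
};

var x = { a: 1, b: 2 };
var y = { b: 3, c: 4 };
extend(x, y); // => { a: 1, b: 3, c: 4 }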

cache(fn, maxSize)¶

Returns a memoized version of fn. The memoized function is backed by a cache that is shared across all executions/possible worlds.

cache is provided as a means of avoiding the repeated computation of a deterministic function. The use of cache with a stochastic function is unlikely to be appropriate. For stochastic memoization see mem().

When maxSize is specified, the memoized function is backed by an LRU cache of size maxSize. The cache has unbounded size when maxSize is omitted.

cache can be used to memoize mutually recursive functions, though for technical reasons it must currently be called as dp.cache for this to work.

cache does not support caching functions of scalar/tensor arguments when performing inference with gradient-based algorithms (e.g. HMC, ELBO). Attempting to do so will produce an error.
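The LRU behavior can be sketched with a JavaScript Map, which iterates in insertion order. This is a sketch of the idea only; keying the cache on JSON.stringify of the arguments is an assumption, not necessarily what WebPPL does internally:

```javascript
// Sketch of an LRU-backed memoizer. A Map iterates in insertion order,
// so re-inserting a key on each hit keeps the least recently used
// entry at the front, ready for eviction when the cache is full.
var cache = function(fn, maxSize) {
  var store = new Map();
  return function() {
    var key = JSON.stringify(Array.prototype.slice.call(arguments));
    if (store.has(key)) {
      var hit = store.get(key);
      store.delete(key);   // refresh recency
      store.set(key, hit);
      return hit;
    }
    var result = fn.apply(null, arguments);
    if (maxSize !== undefined && store.size >= maxSize) {
      store.delete(store.keys().next().value);  // evict least recent
    }
    store.set(key, result);
    return result;
  };
};

var calls = 0;
var square = cache(function(x) { calls += 1; return x * x; }, 2);
square(2); square(2); // second call is a cache hit
calls; // => 1
```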

mem(fn)¶

Returns a memoized version of fn. The memoized function is backed by a cache that is local to the current execution.

Internally, the memoized function compares its arguments by first serializing them with JSON.stringify. This means that memoizing a higher-order function will not work as expected, as all functions serialize to the same string.
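The higher-order caveat is easy to demonstrate in plain JavaScript: JSON.stringify serializes every function identically, so two different function arguments collide on the same cache key. The memoizer below is a sketch of the serialization scheme, not WebPPL's mem itself:

```javascript
// All functions serialize identically under JSON.stringify (to null
// inside an array), so a cache keyed on serialized arguments cannot
// tell two function arguments apart.
var inc = function(x) { return x + 1; };
var dec = function(x) { return x - 1; };

JSON.stringify([inc]) === JSON.stringify([dec]); // => true (both "[null]")

// Consequence for a stringify-keyed memoizer:
var mem = function(fn) {
  var store = {};
  return function() {
    var key = JSON.stringify(Array.prototype.slice.call(arguments));
    if (!(key in store)) { store[key] = fn.apply(null, arguments); }
    return store[key];
  };
};

var apply3 = mem(function(f) { return f(3); });
apply3(inc); // => 4
apply3(dec); // => 4, not 2: the cached result for inc is returned
```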

error(msg)¶

Halts execution of the program and prints msg to the console.

kde(marginal[, kernelWidth])¶

Constructs a KDE() distribution from a sample-based marginal distribution.

AIS(model[, options])¶

Returns an estimate of the log of the normalization constant of model. This is not an unbiased estimator; rather, it is a stochastic lower bound. [grosse16]

The sequence of intermediate distributions used by AIS is obtained by scaling the contribution to the overall score made by the factor statements in model.

When a model includes hard factors (e.g. factor(-Infinity), condition(bool)), this approach does not produce an estimate of the expected quantity. Hence, to avoid confusion, an error is generated by AIS if a hard factor is encountered in the model.

The length of the sequence of distributions is given by the steps option. At step k the score given by each factor is scaled by k / steps. The MCMC transition operator used is based on the MH kernel.

The following options are supported:

steps
The length of the sequence of intermediate distributions.
Default: 20

samples
The number of times the AIS procedure is repeated. AIS returns the average of the log of the estimates produced by the individual runs.
Default: 1

Example usage:

AIS(model, {samples: 100, steps: 100})
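The annealing scheme can be sketched for a model with a single discrete random choice. This is a toy stand-in with a different signature from WebPPL's AIS(model, options): prior and factorFn are assumed representations introduced for illustration, and the transition operator resamples exactly from each intermediate distribution instead of using an MH kernel (both leave the target invariant):

```javascript
// Minimal AIS sketch for one discrete random choice. `prior` is an
// array of {value, logP} entries; `factorFn` plays the role of the
// model's factor statement.
var logSumExp = function(xs) {
  var m = Math.max.apply(null, xs);
  return m + Math.log(xs.reduce(function(s, x) {
    return s + Math.exp(x - m);
  }, 0));
};

var sampleDiscrete = function(values, logWs) {
  var z = logSumExp(logWs);
  var u = Math.random();
  var acc = 0;
  for (var i = 0; i < values.length; i++) {
    acc += Math.exp(logWs[i] - z);
    if (u < acc) { return values[i]; }
  }
  return values[values.length - 1];
};

var AIS = function(prior, factorFn, steps) {
  var values = prior.map(function(e) { return e.value; });
  var logPs = prior.map(function(e) { return e.logP; });
  var x = sampleDiscrete(values, logPs);
  var logW = 0;
  for (var k = 1; k <= steps; k++) {
    // Incremental weight: the factor score is scaled by k / steps at
    // step k, so the increment at the current x is factorFn(x) / steps.
    logW += factorFn(x) / steps;
    // Move x under the intermediate distribution at temperature k / steps.
    x = sampleDiscrete(values, logPs.map(function(lp, i) {
      return lp + (k / steps) * factorFn(values[i]);
    }));
  }
  return logW; // a stochastic lower bound on log Z
};

var prior = [{value: 0, logP: Math.log(0.5)},
             {value: 1, logP: Math.log(0.5)}];
AIS(prior, function(x) { return -2; }, 20); // ≈ -2, the true log Z
```

With a constant factor the estimate equals the true log normalization constant exactly, which makes the sketch easy to check; with non-constant factors the estimate is random and a lower bound only in expectation.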

Bibliography
[grosse16]  Grosse, Roger B., Siddharth Ancha, and Daniel M. Roy. “Measuring the reliability of MCMC inference with bidirectional Monte Carlo.” Advances in Neural Information Processing Systems. 2016. 