scvi.core.modules.JVAE

class scvi.core.modules.JVAE(dim_input_list, total_genes, indices_mappings, gene_likelihoods, model_library_bools, n_latent=10, n_layers_encoder_individual=1, n_layers_encoder_shared=1, dim_hidden_encoder=64, n_layers_decoder_individual=0, n_layers_decoder_shared=0, dim_hidden_decoder_individual=64, dim_hidden_decoder_shared=64, dropout_rate_encoder=0.2, dropout_rate_decoder=0.2, n_batch=0, n_labels=0, dispersion='gene-batch', log_variational=True)[source]

Joint variational auto-encoder for imputing missing genes in spatial data.

Implementation of gimVI [Lopez19].

Parameters
dim_input_list : List[int]

List of the number of input genes for each dataset. If the datasets have different sizes, the dataloader will loop on the smallest until it reaches the size of the longest one.

total_genes : int

Total number of different genes

indices_mappings : List[Union[ndarray, slice]]

List mapping the model inputs to the model outputs. E.g. [[0, 2], [0, 1, 3, 2]] means the first dataset has 2 genes that will be reconstructed at locations [0, 2], and the second dataset has 4 genes that will be reconstructed at [0, 1, 3, 2].
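The mapping convention can be illustrated with a small NumPy sketch. Note that `place_in_output` is a hypothetical helper for illustration only, not part of scvi:

```python
import numpy as np

# Illustration of the indices_mappings semantics (not scvi code):
# with total_genes = 4, the first dataset measures 2 of the genes
# and the second dataset measures all 4, in a permuted order.
total_genes = 4
indices_mappings = [np.array([0, 2]), np.array([0, 1, 3, 2])]

def place_in_output(values, mapping, total_genes):
    """Scatter one dataset's reconstructed genes into the shared output layout."""
    out = np.zeros(total_genes)
    out[mapping] = values
    return out

# Dataset 1 has 2 genes; they land at output positions 0 and 2.
print(place_in_output(np.array([5.0, 7.0]), indices_mappings[0], total_genes))
# -> [5. 0. 7. 0.]

# Dataset 2 has 4 genes; its third gene (30.0) lands at position 3,
# its fourth gene (40.0) at position 2.
print(place_in_output(np.array([10.0, 20.0, 30.0, 40.0]), indices_mappings[1], total_genes))
# -> [10. 20. 40. 30.]
```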

gene_likelihoods : List[str]

List of distributions to use in the generative process: 'zinb', 'nb', 'poisson'

model_library_bools : List[bool]

Whether to model the library size with a latent variable (True) or use the observed value (False), for each dataset

n_latent : int (default: 10)

dimension of latent space

n_layers_encoder_individual : int (default: 1)

number of individual layers in the encoder

n_layers_encoder_shared : int (default: 1)

number of shared layers in the encoder

dim_hidden_encoder : int (default: 64)

dimension of the hidden layers in the encoder

n_layers_decoder_individual : int (default: 0)

number of individual (conditionally batch-normalized) layers in the decoder

n_layers_decoder_shared : int (default: 0)

number of shared layers in the decoder

dim_hidden_decoder_individual : int (default: 64)

dimension of the individual hidden layers in the decoder

dim_hidden_decoder_shared : int (default: 64)

dimension of the shared hidden layers in the decoder

dropout_rate_encoder : float (default: 0.2)

dropout rate for the encoder

dropout_rate_decoder : float (default: 0.2)

dropout rate for the decoder

n_batch : int (default: 0)

total number of batches

n_labels : int (default: 0)

total number of labels

dispersion : str (default: 'gene-batch')

See vae.py

log_variational : bool (default: True)

Log(data+1) prior to encoding for numerical stability. Not normalization.
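A minimal sketch of what this transform does, assuming it corresponds to `log(data + 1)` (computed with the numerically stable `log1p`) as in other scvi VAEs; the array below is illustrative:

```python
import numpy as np

# Sketch of the log_variational=True preprocessing: log(data + 1).
# Raw counts span several orders of magnitude and include zeros.
counts = np.array([0.0, 1.0, 10.0, 10000.0])

# log1p keeps zeros finite (log1p(0) == 0) and compresses the dynamic
# range, which stabilizes the encoder. It is NOT a normalization:
# library-size differences between cells are left untouched.
encoded_input = np.log1p(counts)
print(encoded_input)
```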

Attributes

T_destination

dump_patches

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

decode(z, mode, library[, batch_index, y])

rtype

Tuple[Tensor, Tensor, Tensor, Tensor]

double()

Casts all floating point parameters and buffers to double datatype.

encode(x, mode)

rtype

Tuple[Tensor, Tensor, Tensor, Optional[Tensor], Optional[Tensor], Tensor]

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(x, local_l_mean, local_l_var[, …])

Return the reconstruction loss and the Kullback-Leibler divergences.

get_sample_rate(x, batch_index, *_, **__)

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

reconstruction_loss(x, px_rate, px_r, …)

rtype

Tensor

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

sample_from_posterior_l(x[, mode, deterministic])

Sample the tensor of library sizes from the posterior.

sample_from_posterior_z(x[, mode, deterministic])

Sample tensor of latent values from the posterior.

sample_rate(x, mode, batch_index[, y, …])

Returns the tensor of scaled frequencies of expression.

sample_scale(x, mode, batch_index[, y, …])

Return the tensor of predicted frequencies of expression.

share_memory()

rtype

~T

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

zero_grad()

Sets gradients of all model parameters to zero.