scvi.core.modules.TOTALVAE

class scvi.core.modules.TOTALVAE(n_input_genes, n_input_proteins, n_batch=0, n_labels=0, n_hidden=256, n_latent=20, n_layers_encoder=1, n_layers_decoder=1, dropout_rate_decoder=0.2, dropout_rate_encoder=0.2, gene_dispersion='gene', protein_dispersion='protein', log_variational=True, gene_likelihood='nb', latent_distribution='ln', protein_batch_mask=None, encoder_batch=True)[source]

Total variational inference for CITE-seq data.

Implements the totalVI model of [GayosoSteier20].

Parameters
n_input_genes : int

Number of input genes

n_input_proteins : int

Number of input proteins

n_batch : int (default: 0)

Number of batches

n_labels : int (default: 0)

Number of labels

n_hidden : int (default: 256)

Number of nodes per hidden layer for encoder and decoder

n_latent : int (default: 20)

Dimensionality of the latent space

n_layers_encoder : int (default: 1)

Number of hidden layers used for the encoder NN

n_layers_decoder : int (default: 1)

Number of hidden layers used for the decoder NN

dropout_rate_encoder : float (default: 0.2)

Dropout rate for the encoder neural network

dropout_rate_decoder : float (default: 0.2)

Dropout rate for the decoder neural network

gene_dispersion : str (default: 'gene')

One of the following

  • 'gene' - dispersion parameter of NB is constant per gene across cells

  • 'gene-batch' - dispersion can differ between different batches

  • 'gene-label' - dispersion can differ between different labels

protein_dispersion : str (default: 'protein')

One of the following

  • 'protein' - protein_dispersion parameter is constant per protein across cells

  • 'protein-batch' - protein_dispersion can differ between different batches NOT TESTED

  • 'protein-label' - protein_dispersion can differ between different labels NOT TESTED
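The dispersion mode determines how many dispersion parameters the model learns. A minimal sketch of the resulting parameter shapes (the function name and the exact shape conventions here are illustrative assumptions, not scvi's internals):

```python
# Hypothetical helper: shape of the learned dispersion parameter per mode.
def dispersion_shape(mode, n_genes, n_batch=1, n_labels=1):
    """Return the shape of the dispersion parameter for a given mode."""
    if mode == "gene":
        return (n_genes,)           # one dispersion per gene, shared by all cells
    elif mode == "gene-batch":
        return (n_batch, n_genes)   # a separate dispersion per (batch, gene)
    elif mode == "gene-label":
        return (n_labels, n_genes)  # a separate dispersion per (label, gene)
    raise ValueError(f"unknown dispersion mode: {mode}")

print(dispersion_shape("gene", n_genes=2000))                   # (2000,)
print(dispersion_shape("gene-batch", n_genes=2000, n_batch=3))  # (3, 2000)
```

The same pattern applies to the protein dispersion modes, with proteins in place of genes.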

log_variational : bool (default: True)

Log(data+1) prior to encoding for numerical stability. Not normalization.
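The transform itself is just an element-wise log(1 + x) applied to the raw counts before they enter the encoder. A minimal stdlib sketch (the function name is illustrative):

```python
import math

def log_variational_transform(counts):
    """Apply log(1 + x) to raw counts before encoding, as enabled by
    log_variational=True. This only stabilizes the encoder input
    numerically; it does not normalize the data."""
    return [math.log1p(x) for x in counts]

print(log_variational_transform([0, 1, 9]))  # [0.0, 0.693..., 2.303...]
```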

gene_likelihood : str (default: 'nb')

One of

  • 'nb' - Negative binomial distribution

  • 'zinb' - Zero-inflated negative binomial distribution
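The negative binomial here is the standard mean/inverse-dispersion parameterization commonly used for count likelihoods, with mean μ and inverse-dispersion θ (this sketch is the textbook form, not scvi's exact implementation):

```python
import math

def nb_log_pmf(x, mu, theta):
    """Log-pmf of the negative binomial with mean mu and
    inverse-dispersion theta; variance is mu + mu**2 / theta."""
    return (math.lgamma(x + theta) - math.lgamma(theta) - math.lgamma(x + 1)
            + theta * math.log(theta / (theta + mu))
            + x * math.log(mu / (theta + mu)))

# Probabilities over a (truncated) support sum to essentially 1.
total = sum(math.exp(nb_log_pmf(x, mu=5.0, theta=2.0)) for x in range(1000))
print(round(total, 6))  # 1.0
```

The 'zinb' option adds a gene- and cell-specific zero-inflation mixture weight on top of this distribution.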

latent_distribution : str (default: 'ln')

One of

  • 'normal' - Isotropic normal

  • 'ln' - Logistic normal with normal params N(0, 1)
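A logistic-normal sample is a normal draw pushed through a softmax, so the latent vector lies on the probability simplex. A minimal stdlib sketch (function name and the per-coordinate sampling are illustrative assumptions):

```python
import math
import random

def sample_logistic_normal(mean, std, rng=None):
    """Draw one logistic-normal sample: sample each coordinate from a
    normal, then softmax so the components are positive and sum to 1."""
    rng = rng or random.Random(0)
    raw = [rng.gauss(m, s) for m, s in zip(mean, std)]
    mx = max(raw)                              # subtract max for stability
    exps = [math.exp(v - mx) for v in raw]
    total = sum(exps)
    return [e / total for e in exps]

z = sample_logistic_normal(mean=[0.0] * 4, std=[1.0] * 4)
print(round(sum(z), 6))  # 1.0
```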

Attributes

T_destination

dump_patches

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Sets the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(x, y, local_l_mean_gene, …[, …])

Returns the reconstruction loss and the Kullback-Leibler divergences.

get_reconstruction_loss(x, y, px_dict, py_dict)

Compute reconstruction loss.

get_sample_dispersion(x, y[, batch_index, …])

Returns the tensors of dispersions for genes and proteins.

half()

Casts all floating point parameters and buffers to half datatype.

inference(x, y[, batch_index, label, …])

Internal helper function to compute necessary inference quantities.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

sample_from_posterior_l(x, y[, batch_index, …])

Provides the tensor of library size from the posterior.

sample_from_posterior_z(x, y[, batch_index, …])

Access the tensor of latent values from the posterior.

share_memory()

See torch.Tensor.share_memory_().

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

zero_grad()

Sets gradients of all model parameters to zero.