(Higher)HRNet backbone.

This implementation is based on the PyTorch implementation of HRNet, modified to implement HigherHRNet’s configuration and new deconvolution heads.


class sleap.nn.architectures.hrnet.HigherHRNet(C: int = 18, initial_downsampling_steps: int = 1, n_deconv_modules: int = 1, bottleneck: bool = False, deconv_filters: int = 256, bilinear_upsampling: bool = False, stem_filters: int = 64)[source]

HigherHRNet backbone.


Parameters

  • C – The variant of HRNet to use. The most common is HRNet32, which has ~30M parameters. This number is effectively the number of filters at the highest-resolution output.

  • initial_downsampling_steps – Number of initial downsampling steps at the stem. Decrease if this introduces too much loss of resolution from the initial images.

  • n_deconv_modules – Number of upsampling steps to perform at the head. If this is equal to initial_downsampling_steps, the output will be at the same scale as the input.

  • bottleneck – If True, uses bottleneck blocks instead of simple residual blocks.

  • deconv_filters – Number of filters to use in deconv blocks if using transposed convolutions.

  • bilinear_upsampling – If True, use bilinear upsampling instead of transposed convolutions at the output heads.
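As a quick illustration of the constructor, here is a minimal sketch assuming the import path shown above and a working TensorFlow install; the specific values are illustrative, not recommendations, and the comments restate the parameter descriptions::

    from sleap.nn.architectures.hrnet import HigherHRNet

    # Default configuration (C=18, single deconv module).
    backbone = HigherHRNet()

    # A wider variant with bottleneck blocks. Values here are only examples.
    backbone_w32 = HigherHRNet(
        C=32,                          # HRNet32-style width (~30M parameters)
        initial_downsampling_steps=1,  # downsampling steps at the stem
        n_deconv_modules=1,            # equal to the stem steps -> output at input scale
        bottleneck=True,               # bottleneck blocks instead of simple residual blocks
        deconv_filters=256,            # filters per transposed-convolution module
        bilinear_upsampling=False,     # keep transposed convolutions at the heads
    )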

property down_blocks

Returns the number of downsampling steps in the model.

output(x_in, n_output_channels)[source]

Builds the layers for this backbone and returns the output tensor.

Parameters

  • x_in – Input 4D tf.Tensor.

  • n_output_channels – The number of final output channels.


Returns

A tf.keras.Model whose outputs are a list of tf.Tensors at each scale of the deconv_modules.

Return type

tf.keras.Model
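A hedged usage sketch for building the graph: the input shape and channel count below are arbitrary placeholders, and the sketch assumes the documented return value (a tf.keras.Model); check the actual return against the note above::

    import tensorflow as tf
    from sleap.nn.architectures.hrnet import HigherHRNet

    backbone = HigherHRNet(C=32)

    # 4D input tensor: (batch, height, width, channels); the size is a placeholder.
    x_in = tf.keras.Input(shape=(512, 512, 1))

    # n_output_channels would typically be the number of confidence map
    # channels, e.g., one per body part (15 here is arbitrary).
    model = backbone.output(x_in, n_output_channels=15)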


property output_scale

Returns the relative scaling factor of this backbone.
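For instance, the two properties above can be combined to estimate the output grid size. This sketch assumes output_scale is a fractional scale relative to the input resolution, which is an interpretation, not something stated above::

    # Assumption: output_scale is a fraction of the input resolution
    # (e.g., 0.5 would mean the output grid is half the input height/width).
    input_height, input_width = 512, 512
    print("downsampling steps:", backbone.down_blocks)
    print("output size:",
          int(input_height * backbone.output_scale),
          int(input_width * backbone.output_scale))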


Adds a delimiter if the prefix is not empty.

sleap.nn.architectures.hrnet.bottleneck_block(x_in, filters, expansion_rate=4, name_prefix=None)[source]

Creates a convolutional block with a bottleneck.
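To make the block structure concrete, here is a minimal tf.keras sketch of the bottleneck pattern (1x1 reduce, 3x3, 1x1 expand by expansion_rate, plus a residual shortcut). This is an illustrative reimplementation, not the function's actual source; normalization/activation placement, the projection shortcut, and the omission of name_prefix handling are assumptions::

    import tensorflow as tf

    def bottleneck_block_sketch(x_in, filters, expansion_rate=4):
        """Illustrative bottleneck residual block (name_prefix handling omitted)."""
        out_filters = filters * expansion_rate

        # 1x1 reduce.
        x = tf.keras.layers.Conv2D(filters, 1, padding="same", use_bias=False)(x_in)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.Activation("relu")(x)

        # 3x3 spatial convolution at the reduced width.
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.Activation("relu")(x)

        # 1x1 expand back to filters * expansion_rate.
        x = tf.keras.layers.Conv2D(out_filters, 1, padding="same", use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)

        # Projection shortcut when the channel count changes, identity otherwise.
        if x_in.shape[-1] != out_filters:
            skip = tf.keras.layers.Conv2D(out_filters, 1, padding="same", use_bias=False)(x_in)
            skip = tf.keras.layers.BatchNormalization()(skip)
        else:
            skip = x_in

        x = tf.keras.layers.Add()([x, skip])
        return tf.keras.layers.Activation("relu")(x)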

sleap.nn.architectures.hrnet.simple_block(x_in, filters, stride=1, downsampling_layer=None, name_prefix=None)[source]

Creates a basic residual convolutional block.
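Similarly, a minimal tf.keras sketch of the basic residual pattern (two 3x3 convolutions plus a shortcut). The handling of stride and downsampling_layer here is an assumption about how the shortcut is matched to the main branch, and name_prefix handling is omitted::

    import tensorflow as tf

    def simple_block_sketch(x_in, filters, stride=1, downsampling_layer=None):
        """Illustrative basic residual block (name_prefix handling omitted)."""
        x = tf.keras.layers.Conv2D(filters, 3, strides=stride, padding="same", use_bias=False)(x_in)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.Activation("relu")(x)

        x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)

        # Shortcut: use the caller-provided downsampling layer if given; otherwise
        # project with a strided 1x1 conv when the shape changes, else identity.
        if downsampling_layer is not None:
            skip = downsampling_layer(x_in)
        elif stride != 1 or x_in.shape[-1] != filters:
            skip = tf.keras.layers.Conv2D(filters, 1, strides=stride, padding="same", use_bias=False)(x_in)
            skip = tf.keras.layers.BatchNormalization()(skip)
        else:
            skip = x_in

        x = tf.keras.layers.Add()([x, skip])
        return tf.keras.layers.Activation("relu")(x)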