This module provides a generalized implementation of the LEAP CNN.
See the LeapCNN class docstring for more information.
- class sleap.nn.architectures.leap.LeapCNN(stacks: int = 1, filters: int = 64, filters_rate: float = 2, down_blocks: int = 3, down_convs_per_block: int = 3, up_blocks: int = 3, up_interpolate: bool = False, up_convs_per_block: int = 2)#
LEAP CNN from “Fast animal pose estimation using deep neural networks” (2019).
This is a simple encoder-decoder style architecture without skip connections.
Using the defaults will create a network with ~10.8M parameters.
Parameters:
- filters – Base number of filters in the first encoder block. More filters will increase the representational capacity of the network at the cost of memory and runtime.
- filters_rate – Factor to increase the number of filters by in each block.
- down_blocks – Number of blocks with pooling in the encoder. More down blocks will increase the effective maximum receptive field, but may incur loss of spatial precision.
- down_convs_per_block – Number of convolutions in each encoder block. More convolutions per block will increase the representational capacity of the network at the cost of memory and runtime.
- up_blocks – Number of blocks with upsampling in the decoder. If this is equal to down_blocks, the output of this network will be at the same stride (scale) as the input.
- up_interpolate – If True, use bilinear interpolation instead of transposed convolutions for upsampling. Interpolation is faster, but transposed convolutions may be able to learn richer or more complex upsampling to recover details from higher scales. If using transposed convolutions, the number of filters is determined by filters_rate, progressively decreasing the number of filters at each step.
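As a rough illustration of how these parameters interact (a sketch under stated assumptions, not SLEAP's actual implementation), the per-block filter counts and the output stride implied by the defaults can be computed with plain arithmetic. This assumes each down block halves the spatial resolution via pooling and each up block doubles it:

```python
# Sketch (not SLEAP's code): filter counts and output stride implied by
# the LeapCNN defaults (filters=64, filters_rate=2, down_blocks=3, up_blocks=3).
def encoder_filters(filters=64, filters_rate=2.0, down_blocks=3):
    """Filters in each encoder block: filters * filters_rate**i for block i."""
    return [int(filters * filters_rate**i) for i in range(down_blocks)]

def output_stride(down_blocks=3, up_blocks=3):
    """Output stride relative to the input, assuming 2x pooling/upsampling."""
    return 2 ** (down_blocks - up_blocks)

print(encoder_filters())  # [64, 128, 256]
print(output_stride())    # 1 -> output at the same scale as the input
```

With the defaults, `up_blocks == down_blocks`, so the output stride is 1 (same scale as the input); dropping one up block would leave the output at stride 2.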
- property decoder_stack: List[sleap.nn.architectures.encoder_decoder.SimpleUpsamplingBlock]#
Return the decoder block configuration.
- property encoder_stack: List[sleap.nn.architectures.encoder_decoder.SimpleConvBlock]#
Return the encoder block configuration.