sleap.io.visuals

Module for generating videos with visual annotation overlays.

class sleap.io.visuals.VideoMarkerThread(in_q: queue.Queue, out_q: queue.Queue, labels: sleap.io.dataset.Labels, video_idx: int, scale: float, show_edges: bool = True, crop_size_xy: Optional[Tuple[int, int]] = None, color_manager: Optional[sleap.gui.color.ColorManager] = None)[source]

Annotate frame images (draw instances).

Parameters
  • in_q – Queue with (list of frame indexes, ndarray of frame images).

  • out_q – Queue to send annotated images as (images, h, w, channels) ndarray.

  • labels – The Labels object from which to get data for annotating.

  • video_idx – Index of the Video in the labels.videos list.

  • scale – Scale of output images (point locations are scaled to match).

  • show_edges – Whether to draw lines between nodes.

  • crop_size_xy – Size of crop around instances, or None for full images.

  • color_manager – ColorManager object that determines which colors to use for each instance/node/edge.

run()[source]

Method representing the thread’s activity.

You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
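The producer/consumer pattern this thread follows can be sketched as below. This is a toy stand-in, not the actual implementation: the class name, the `SENTINEL` end-of-stream marker, and the string "annotation" step are all illustrative; the real thread draws instances onto frame images.

```python
import queue
import threading

SENTINEL = None  # hypothetical end-of-stream marker

class MarkerThreadSketch(threading.Thread):
    """Toy consumer in the spirit of VideoMarkerThread.run(): take
    (frame indexes, images) chunks from in_q, annotate them, and
    forward the results to out_q."""

    def __init__(self, in_q: queue.Queue, out_q: queue.Queue):
        super().__init__(daemon=True)
        self.in_q = in_q
        self.out_q = out_q

    def run(self):
        while True:
            item = self.in_q.get()
            if item is SENTINEL:
                self.out_q.put(SENTINEL)  # propagate shutdown downstream
                break
            frame_idxs, images = item
            # The real thread draws instances here; we just tag each image.
            self.out_q.put((frame_idxs, [f"annotated-{im}" for im in images]))

in_q, out_q = queue.Queue(), queue.Queue()
t = MarkerThreadSketch(in_q, out_q)
t.start()
in_q.put(([0, 1], ["img0", "img1"]))
in_q.put(SENTINEL)
t.join()
annotated = out_q.get()
```

Running the thread as a daemon and forwarding the sentinel keeps downstream stages (e.g. a writer) able to shut down cleanly in the same order as the data.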

sleap.io.visuals.img_to_cv(img: numpy.ndarray) → numpy.ndarray[source]

Prepares a frame image as needed for OpenCV.
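OpenCV drawing functions expect uint8 arrays in BGR channel order, so a conversion of this kind typically rescales float images, expands grayscale to three channels, and flips RGB to BGR. The sketch below illustrates those steps with NumPy only; it is an assumption about what the conversion involves, not the library's actual implementation.

```python
import numpy as np

def img_to_cv_sketch(img: np.ndarray) -> np.ndarray:
    """Illustrative stand-in for img_to_cv: normalize a grayscale or RGB
    frame into the uint8 BGR layout OpenCV drawing functions expect."""
    # Floats in [0, 1] -> uint8 in [0, 255]
    if np.issubdtype(img.dtype, np.floating):
        img = (img * 255).astype(np.uint8)
    # Single-channel -> 3-channel by repetition
    if img.ndim == 2 or img.shape[-1] == 1:
        img = np.repeat(img.reshape(*img.shape[:2], 1), 3, axis=-1)
    # RGB -> BGR channel order
    return img[..., ::-1]

frame = np.zeros((4, 4), dtype=np.float32)
bgr = img_to_cv_sketch(frame)  # uint8, shape (4, 4, 3)
```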

sleap.io.visuals.reader(out_q: queue.Queue, video: sleap.io.video.Video, frames: List[int], scale: float = 1.0)[source]

Read frame images from video and send them into queue.

Parameters
  • out_q – Queue to send (list of frame indexes, ndarray of frame images) for chunks of video.

  • video – The Video object to read.

  • frames – Full list of frame indexes to read.

  • scale – Output scale for frame images.

Returns

None.
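The chunked-producer behavior described above can be sketched as follows. The chunk size, the in-memory "video" array, and the `None` end marker are all hypothetical choices for illustration; the real reader pulls frames from a Video object.

```python
import queue
import numpy as np

def reader_sketch(out_q: queue.Queue, video: np.ndarray, frames, chunk_size: int = 2):
    """Toy producer: read the requested frames from an in-memory 'video'
    (a frames x height x width x channels array) and enqueue them in
    (list of frame indexes, ndarray of frame images) chunks."""
    for start in range(0, len(frames), chunk_size):
        idxs = frames[start:start + chunk_size]
        imgs = np.stack([video[i] for i in idxs])
        out_q.put((idxs, imgs))
    out_q.put(None)  # hypothetical end-of-stream marker

q = queue.Queue()
fake_video = np.arange(5 * 2 * 2 * 1).reshape(5, 2, 2, 1)
reader_sketch(q, fake_video, frames=[0, 2, 4])
first_idxs, first_imgs = q.get()
```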

sleap.io.visuals.resize_image(img: numpy.ndarray, scale: float) → numpy.ndarray[source]

Resizes a single image with shape (height, width, channels).
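The effect of scaling an (height, width, channels) image can be illustrated with a minimal nearest-neighbour resize in NumPy. This is only a sketch of the operation's contract; the actual function presumably delegates to an image library such as OpenCV.

```python
import numpy as np

def resize_image_sketch(img: np.ndarray, scale: float) -> np.ndarray:
    """Illustrative nearest-neighbour resize of an (h, w, c) image."""
    h, w = img.shape[:2]
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    # Map each output row/column back to its nearest source row/column.
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    return img[rows][:, cols]

img = np.ones((100, 200, 3), dtype=np.uint8)
small = resize_image_sketch(img, 0.5)  # shape (50, 100, 3)
```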

sleap.io.visuals.save_labeled_video(filename: str, labels: sleap.io.dataset.Labels, video: sleap.io.video.Video, frames: List[int], fps: int = 15, scale: float = 1.0, crop_size_xy: Optional[Tuple[int, int]] = None, show_edges: bool = True, color_manager: Optional[sleap.gui.color.ColorManager] = None, gui_progress: bool = False)[source]

Generate and save a video with annotations.

Parameters
  • filename – Output filename.

  • labels – The dataset from which to get data.

  • video – The source Video we want to annotate.

  • frames – List of frames to include in output video.

  • fps – Frames per second for output video.

  • scale – Scale of output images (point locations are scaled to match).

  • crop_size_xy – Size of crop around instances, or None for full images.

  • show_edges – Whether to draw lines between nodes.

  • color_manager – ColorManager object that determines which colors to use for each instance/node/edge.

  • gui_progress – Whether to show Qt GUI progress dialog.

Returns

None.
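Given the reader, VideoMarkerThread, and writer documented in this module, this function presumably wires them into a queue-based pipeline. The toy sketch below shows that wiring with stubbed stage bodies; the stage functions, the `None` end markers, and the zero "elapsed time" values are illustrative assumptions, not the actual implementation.

```python
import queue
import threading

def pipeline_sketch(frames):
    """Toy version of the reader -> marker -> writer pipeline that
    save_labeled_video presumably coordinates (stage bodies are stubs)."""
    read_q, write_q, progress_q = queue.Queue(), queue.Queue(), queue.Queue()

    def reader():
        for i in frames:
            read_q.put(([i], f"raw{i}"))    # stand-in for frame images
        read_q.put(None)

    def marker():
        while (item := read_q.get()) is not None:
            idxs, img = item
            write_q.put(f"marked-{img}")    # stand-in for drawing instances
        write_q.put(None)

    written = []

    def writer():
        n = 0
        while (img := write_q.get()) is not None:
            written.append(img)             # stand-in for encoding a frame
            n += 1
            progress_q.put((n, 0.0))
        progress_q.put((-1, 0.0))           # done marker, as in writer()

    threads = [threading.Thread(target=f) for f in (reader, marker, writer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    progress = []
    while not progress_q.empty():
        progress.append(progress_q.get())
    return written, progress

out, progress = pipeline_sketch([0, 1, 2])
```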

sleap.io.visuals.writer(in_q: queue.Queue, progress_queue: queue.Queue, filename: str, fps: float)[source]

Write annotated images to video.

Image size is determined by the first image received in queue.

Parameters
  • in_q – Queue with annotated images as (images, h, w, channels) ndarray.

  • progress_queue – Queue to send progress as (total frames written: int, elapsed time: float). Send (-1, elapsed time) when done.

  • filename – Full path to the output video.

  • fps – Frames per second for the output video.

Returns

None.
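The writer loop with progress reporting and the documented (-1, elapsed time) done marker can be sketched as follows. Video encoding is stubbed out with a plain list, and the `None` end-of-stream marker is an assumption for illustration.

```python
import queue
import time

def writer_sketch(in_q: queue.Queue, progress_q: queue.Queue, sink: list):
    """Toy writer: append image chunks from in_q to `sink` (standing in
    for a video file) and report (total frames written, elapsed seconds)
    after each chunk; send (-1, elapsed) when the stream ends."""
    start = time.time()
    total = 0
    while True:
        chunk = in_q.get()
        if chunk is None:                       # hypothetical end marker
            progress_q.put((-1, time.time() - start))
            return
        sink.extend(chunk)                      # stand-in for encoding
        total += len(chunk)
        progress_q.put((total, time.time() - start))

in_q, progress_q = queue.Queue(), queue.Queue()
in_q.put(["f0", "f1"])
in_q.put(["f2"])
in_q.put(None)
frames_out = []
writer_sketch(in_q, progress_q, frames_out)
```

Reporting cumulative counts rather than per-chunk counts lets a consumer (such as a GUI progress dialog) display totals without keeping its own state.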