DREEM Relates Every Entity's Motion
Welcome to the documentation for DREEM – an open-source tool for multiple object tracking. DREEM is a framework that lets you train your own models, run inference on new data, and evaluate your results. DREEM supports a variety of detection types, including keypoints, bounding boxes, and segmentation masks. You can use any detection model, convert its output to a format DREEM can read, and then either train a tracking model or run inference with a pretrained one.
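The "convert your detector's output" step can be sketched in plain Python. This is only an illustration of organizing raw detections by frame before conversion; the `Detection` class and the record layout here are hypothetical and are not DREEM's actual input schema (see the Usage Guide for the supported formats).

```python
# Illustrative sketch only: the Detection class and per-frame row layout
# are hypothetical placeholders, not DREEM's on-disk format.
from dataclasses import dataclass

@dataclass
class Detection:
    frame_idx: int                             # video frame index
    bbox: tuple[float, float, float, float]    # (x_min, y_min, x_max, y_max) in pixels
    score: float                               # detector confidence

def detections_to_frames(detections):
    """Group raw detector output by frame, the shape a converter would consume."""
    frames: dict[int, list] = {}
    for det in sorted(detections, key=lambda d: d.frame_idx):
        frames.setdefault(det.frame_idx, []).append((det.bbox, det.score))
    return frames

dets = [
    Detection(0, (10, 20, 50, 60), 0.90),
    Detection(1, (12, 22, 52, 62), 0.80),
    Detection(0, (100, 100, 140, 150), 0.95),
]
per_frame = detections_to_frames(dets)
# frame 0 carries two detections, frame 1 carries one
```

Whatever detector you use, the idea is the same: reduce its output to per-frame instances, then convert those into DREEM's expected input format.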
Key Features
- Command-Line & API Access: Use DREEM via a simple CLI or integrate into your own Python scripts.
- Configurable Workflows: Easily customize training and inference using YAML configuration files.
- Pretrained Models: Get started quickly with models trained specifically for the microscopy and animal tracking domains.
- Visualization: Tracking outputs are directly compatible with SLEAP's GUI.
- Examples: Step-by-step notebooks and guides for common workflows.
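To give a feel for the YAML-driven workflow, a configuration file might look roughly like the fragment below. Every key name here is a hypothetical placeholder chosen for illustration; consult the Usage Guide for the actual configuration schema.

```yaml
# Hypothetical sketch of a DREEM-style YAML config.
# Key names are illustrative only -- see the Usage Guide for the real schema.
model:
  checkpoint: path/to/pretrained.ckpt   # pretrained weights to start from
dataset:
  train_path: data/train                # training detections/videos
  val_path: data/val                    # validation split
trainer:
  max_epochs: 100
  batch_size: 8
```

Keeping these settings in a file rather than on the command line makes runs reproducible and easy to version alongside your data.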
Installation
Head over to the Installation Guide to get started.
Quickstart
Ready to try DREEM? Follow the Quickstart Guide to:
- Download example datasets and pretrained models
- Run tracking on sample videos
- Visualize your results
Example Workflows
Explore the Examples section for notebooks that walk you through the DREEM pipeline. We have an end-to-end demo that includes model training, as well as a microscopy example that shows how to use DREEM with an off-the-shelf detection model.
Documentation Structure
- Installation
- Quickstart
- Usage Guide
- Examples
- API Reference
Get Help
- Questions? Open an issue on GitHub.
- Contributions: We welcome contributions! See our Contributing Guide for details (link to be added).