Update README.md
README.md CHANGED

@@ -8,8 +8,83 @@ tags:
Copyright 2021-2023 by Mackenzie Mathis, Alexander Mathis, Shaokai Ye and contributors. All rights reserved.

- Please cite **Ye et al. 2023** if you use this model in your work: https://arxiv.org/abs/2203.07436v1
- If this license is not suitable for your business or project, please contact EPFL-TTO (https://tto.epfl.ch/) for a full commercial license.

This software may not be used to harm any animal deliberately!

**MODEL CARD:**

This model was trained on a dataset called "TopViewMouse-5K." It was trained in TensorFlow 2 within the [DeepLabCut framework](http://www.deeplabcut.org).
Full training details can be found in Ye et al. 2023.
You can use this model with our lightweight loading package, [DLCLibrary](https://github.com/DeepLabCut/DLClibrary). Here is an example usage:

```python
from pathlib import Path
from dlclibrary import download_huggingface_model

# Creates a folder and downloads the model to it
model_dir = Path("./superanimal_topviewmouse_model")
model_dir.mkdir()
download_huggingface_model("superanimal_topviewmouse", model_dir)
```
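
As a quick, optional sanity check (a minimal sketch, not part of the loader API), you can list what landed in the folder; the exact file names depend on how the snapshot is packaged on Hugging Face. Also note that `model_dir.mkdir()` above raises an error if the folder already exists, so pass `exist_ok=True` if you re-run the snippet.

```python
# Optional: list the downloaded files (names depend on the packaged snapshot,
# typically TensorFlow checkpoint files plus a pose configuration file).
for f in sorted(model_dir.iterdir()):
    print(f.name, f.stat().st_size, "bytes")
```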

**Training Data:**

It was trained jointly on the following datasets:

- **3CSI, BM, EPM, LDB, OFT** See full details at (1) and in (2).

- **BlackMice** See full details at (3).

- **WhiteMice** Courtesy of Prof. Sam Golden and Nastacia Goodwin. See details in SimBA (4).

- **TriMouse** See full details at (5).

- **DLC-Openfield** See full details at (6).

- **Kiehn-Lab-Openfield, Swimming, and treadmill** Courtesy of Prof. Ole Kiehn, Dr. Jared Cregg, and Prof. Carmelo Bellardita; see details at (7).

- **MausHaus** We collected video data from five single-housed C57BL/6J male and female mice in an extended home cage, carried out in the laboratory of Mackenzie Mathis at Harvard University and also EPFL (housing temperature 20-25 °C, humidity 20-50%). Data were recorded at 30 Hz with 640 × 480 pixel resolution, acquired with White Matter, LLC eV cameras. Annotators localized 26 keypoints across 322 frames sampled from within DeepLabCut using the k-means clustering approach (8). All experimental procedures for mice were in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and approved by the Harvard Institutional Animal Care and Use Committee (IACUC) (n=1 mouse), and by the Veterinary Office of the Canton of Geneva (Switzerland; license GE01) (n=4 mice).

Here is an image with examples from the datasets, the distribution of images per dataset, and the keypoint guide.

Please note that each dataset was labeled by separate labs and separate individuals; therefore, while we map names to a unified pose vocabulary, there will be annotator bias in keypoint placement (see Ye et al. 2023 for our Supplementary Note on annotator bias).
You will also note that the dataset primarily contains C57BL/6J mice, with only some CD1 examples.
If performance is not as good as you need it to be, we recommend first trying video adaptation (see Ye et al. 2023; a minimal sketch is shown below the figure), or fine-tuning these weights with your own labeled data.

<p align="center">
<img src="https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1690986892069-I1DP3EQU14DSP5WB6FSI/modelcard-TVM.png?format=1500w" width="95%">
</p>
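
As a concrete illustration of the video adaptation route recommended above, here is a minimal sketch using DeepLabCut's SuperAnimal video inference API. It assumes a recent DeepLabCut release that ships `video_inference_superanimal`; the video path is a placeholder, and argument names (e.g., `video_adapt`, `scale_list`, and the model/detector selection arguments in newer releases) have changed between versions, so check `help(deeplabcut.video_inference_superanimal)` for the exact signature in your installation.

```python
import deeplabcut

# Placeholder path: point this at your own top-view mouse video(s).
videos = ["/path/to/your_topview_video.mp4"]

# Run SuperAnimal-TopViewMouse inference with self-supervised video adaptation.
# In recent DeepLabCut versions the weights are fetched automatically if they
# are not already cached, so the manual download above is optional here.
deeplabcut.video_inference_superanimal(
    videos,
    "superanimal_topviewmouse",
    video_adapt=True,            # adapt the model to this video before final predictions
    scale_list=[200, 300, 400],  # candidate image heights (pixels) to test; tune for your setup
)
```

Video adaptation fine-tunes the network on unlabeled frames from your own video before prediction, which is usually the cheapest way to close the domain gap described above; if that is still not enough, fine-tune the weights on your own labeled frames.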

1. Oliver Sturman, Lukas von Ziegler, Christa Schläppi, Furkan Akyol, Mattia Privitera, Daria Slominski, Christina Grimm, Laetitia Thieren, Valerio Zerbi, Benjamin Grewe, et al. Deep learning-based behavioral analysis reaches human accuracy and is capable of outperforming commercial solutions. Neuropsychopharmacology, 45(11):1942–1952, 2020.
2. Lukas von Ziegler, Oliver Sturman, and Johannes Bohacek. Videos for DeepLabCut, Noldus EthoVision X14 and TSE Multi Conditioning Systems comparisons. https://doi.org/10.5281/zenodo.3608658. Zenodo, January 2020.
3. Isaac Chang. Trained DeepLabCut model for tracking mouse in open field arena with topdown view. https://doi.org/10.5281/zenodo.3955216. Zenodo, July 2020.
4. Simon RO Nilsson, Nastacia L. Goodwin, Jia Jie Choong, Sophia Hwang, Hayden R Wright, Zane C Norville, Xiaoyu Tong, Dayu Lin, Brandon S. Bentzley, Neir Eshel, Ryan J McLaughlin, and Sam A. Golden. Simple Behavioral Analysis (SimBA) – an open source toolkit for computer classification of complex social behaviors in experimental animals. bioRxiv, 2020.
5. Jessy Lauer, Mu Zhou, Shaokai Ye, William Menegas, Steffen Schneider, Tanmay Nath, Mohammed Mostafizur Rahman, Valentina Di Santo, Daniel Soberanes, Guoping Feng, Venkatesh N. Murthy, George Lauder, Catherine Dulac, Mackenzie W. Mathis, and Alexander Mathis. Multi-animal pose estimation, identification and tracking with DeepLabCut. Nature Methods, 19:496–504, 2022.
6. Alexander Mathis, Pranav Mamidanna, Kevin M Cury, Taiga Abe, Venkatesh N Murthy, Mackenzie Weygandt Mathis, and Matthias Bethge. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21:1281–1289, 2018.
7. Jared M. Cregg, Roberto Leiras, Alexia Montalant, Paulina Wanken, Ian R. Wickersham, and Ole Kiehn. Brainstem neurons that command mammalian locomotor asymmetries. Nature Neuroscience, 23:730–740, 2020.
8. Tanmay Nath, Alexander Mathis, An Chi Chen, Amber Patel, Matthias Bethge, and Mackenzie Weygandt Mathis. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nature Protocols, 14(7):2152–2176, 2019.