File:An-open-source-toolbox-for-automated-phenotyping-of-mice-in-behavioral-tasks-Movie1.ogv
From Wikimedia Commons, the free media repository
Original file (Ogg Theora video file, length 32 s, 1,280 × 720 pixels, 390 kbps, file size: 1.51 MB)
Summary
Description | An-open-source-toolbox-for-automated-phenotyping-of-mice-in-behavioral-tasks-Movie1.ogv
English: Scoring of object interaction. Overview of the spatial object recognition scoring module, also applicable to other "interaction" tasks. Typically, a video recording contains multiple enclosed boxes, and each box may contain a variable number of objects in a particular spatial configuration. Using an initialization GUI, users define the number of boxes and the number of objects per box for an experiment and interactively define regions of interest. Once several videos are initialized, scoring is done as a batch job. An example of real-time tracking of mice is illustrated. Top panel: The centroid and head of the animal are automatically detected in each frame and marked with green and red dots, respectively. A vector in the direction of the animal's gaze is marked in red (vector magnitude increased for visual clarity). Interaction is scored when the gaze vector crosses a user-defined region of interest (glass and metal objects). The boundary of the interacting object is highlighted in red during the movie. Bottom panel: The cumulative time spent in the arena during each bout of interaction. The movie is sped up 3×. Once videos are scored, users can quickly scroll through the set of frames labeled as interacting to verify the accuracy of the algorithm or remove any false positives if needed.
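The scoring rule described above (interaction is flagged when the gaze vector, drawn from centroid through head, crosses an object's region of interest) can be sketched in a few lines. This is a minimal Python illustration of that geometric test, not the published toolbox's code; the ROI shape (a circle), the gaze `reach` in pixels, and the frame rate are all assumptions for the example.

```python
import math

def gaze_interacts(centroid, head, roi_center, roi_radius, reach=40.0):
    """Return True if the gaze ray from the head crosses a circular ROI.

    centroid, head: (x, y) tracking points; gaze direction = head - centroid.
    reach: assumed maximum distance (pixels) the gaze ray extends.
    """
    dx, dy = head[0] - centroid[0], head[1] - centroid[1]
    norm = math.hypot(dx, dy)
    if norm == 0:  # degenerate frame: no gaze direction
        return False
    ux, uy = dx / norm, dy / norm
    # Sample along the ray from the head; interact if any sample lands in the ROI.
    for t in range(int(reach) + 1):
        px, py = head[0] + ux * t, head[1] + uy * t
        if math.hypot(px - roi_center[0], py - roi_center[1]) <= roi_radius:
            return True
    return False

def score_video(frames, roi_center, roi_radius, fps=30.0):
    """Accumulate interaction time over (centroid, head) pairs, one per frame."""
    flags = [gaze_interacts(c, h, roi_center, roi_radius) for c, h in frames]
    return sum(flags) / fps, flags
```

A per-frame boolean list like `flags` is also what makes the verification step in the caption cheap: the frames labeled `True` can be replayed for the user to confirm or reject.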
Date |
Source | Video S1 from Patel T, Gullotti D, Hernandez P, O'Brien W, Capehart B, Morrison B, Bass C, Eberwine J, Abel T, Meaney D (2014). "An open-source toolbox for automated phenotyping of mice in behavioral tasks". Frontiers in Behavioral Neuroscience. doi:10.3389/fnbeh.2014.00349. PMID 25339878. PMC 4189437.
Author | Patel T, Gullotti D, Hernandez P, O'Brien W, Capehart B, Morrison B, Bass C, Eberwine J, Abel T, Meaney D | ||
Permission (Reusing this file) | This file is licensed under the Creative Commons Attribution 4.0 International license.
File history
Click on a date/time to view the file as it appeared at that time.
Date/Time | Thumbnail | Dimensions | User | Comment
---|---|---|---|---
current | 13:52, 24 October 2014 | 32 s, 1,280 × 720 (1.51 MB) | Open Access Media Importer Bot | Automatically uploaded media file from Open Access source.
File usage on Commons
There are no pages that use this file.
Metadata
This file contains additional information such as Exif metadata which may have been added by the digital camera, scanner, or software program used to create or digitize it. If the file has been modified from its original state, some details such as the timestamp may not fully reflect those of the original file. The timestamp is only as accurate as the clock in the camera, and it may be completely wrong.
Author | Patel T, Gullotti D, Hernandez P, O'Brien W, Capehart B, Morrison B, Bass C, Eberwine J, Abel T, Meaney D |
---|---
Usage terms | http://creativecommons.org/licenses/by/4.0/ |
Image title | Scoring of object interaction (full caption as given in the Summary above) |
Software used | Xiph.Org libtheora 1.1 20090822 (Thusnelda) |
Date and time of digitizing | 2014-10-08 |