Also see my Google Scholar profile.
- M. Lorbach, R. Poppe, and R. C. Veltkamp, Interactive rodent behavior annotation in video using active learning, Multimedia Tools and Applications, 2019.
Manual annotation of rodent behaviors in video is time-consuming. By learning a classifier, we can automate the labeling process. Still, this strategy requires a sufficient number of labeled examples. Moreover, we need to train new classifiers when there is a change in the set of behaviors that we consider or in the manifestation of these behaviors in video. Consequently, there is a need for an efficient way to annotate rodent behaviors. In this paper, we introduce a framework for interactive behavior annotation in video based on active learning. By putting a human in the loop, we alternate between learning and labeling. We apply the framework to three rodent behavior datasets and show that we can train accurate behavior classifiers with a substantially reduced number of labeled samples. We confirm the efficacy of the tool in a user study, demonstrating that interactive annotation facilitates efficient, high-quality behavior measurements in practice.
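A minimal sketch of the alternating learn/label loop described above, assuming precomputed per-frame feature vectors `X` and a human labeling function `oracle_label` (both placeholders); the random forest and the margin-based query strategy are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def interactive_annotation(X, oracle_label, n_seed=20, n_rounds=10, batch=10):
    rng = np.random.default_rng(0)
    labeled = [int(i) for i in rng.choice(len(X), size=n_seed, replace=False)]
    y = {i: oracle_label(i) for i in labeled}         # human labels a random seed set
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(n_rounds):
        clf.fit(X[labeled], [y[i] for i in labeled])  # learning step
        proba = np.sort(clf.predict_proba(X), axis=1)
        margin = proba[:, -1] - proba[:, -2]          # small margin = ambiguous frame
        margin[labeled] = np.inf                      # never re-query labeled frames
        for i in np.argsort(margin)[:batch]:          # labeling step: human in the loop
            labeled.append(int(i))
            y[int(i)] = oracle_label(int(i))
    clf.fit(X[labeled], [y[i] for i in labeled])      # final model for auto-annotation
    return clf
```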
@article{lorbach_interactive_2019,
author = {Lorbach, Malte and Poppe, Ronald and Veltkamp, Remco C.},
title = {Interactive rodent behavior annotation in video using active learning},
journal = {Multimedia Tools and Applications},
year = {2019},
doi = {10.1007/s11042-019-7169-4}
}
- M. Lorbach, E. I. Kyriakou, R. Poppe, E. A. van Dam, L. P. J. J. Noldus, and R. C. Veltkamp, Learning to Recognize Rat Social Behavior: Novel Dataset and Cross-Dataset Application, Journal of Neuroscience Methods, vol. 300, pp. 166–172, 2018.
Background: Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experiment settings.
New Method: To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce RatSI, the first publicly available rat social interaction dataset.
Results: We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experiment setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance.
Comparison with Existing Methods: Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight into the performance of classifiers.
Conclusions: With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining them with adaptation techniques may enable us to apply automated recognition methods to a variety of animals and experiment settings.
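To make the cross-dataset argument concrete, here is a hedged sketch of the evaluation protocol, assuming two annotated datasets with compatible features and labels (the names `X_a, y_a, X_b, y_b` and the classifier are placeholders, not taken from the paper):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def within_vs_cross(X_a, y_a, X_b, y_b):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    within = cross_val_score(clf, X_a, y_a, cv=5).mean()  # standard within-dataset estimate
    cross = clf.fit(X_a, y_a).score(X_b, y_b)             # performance in the other setting
    return within, cross  # a large gap reveals setting-induced behavior variation
```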
@article{lorbach_learning_2018,
title = {Learning to {{Recognize Rat Social Behavior}}: {{Novel Dataset}} and {{Cross}}-{{Dataset Application}}},
doi = {10.1016/j.jneumeth.2017.05.006},
journal = {Journal of Neuroscience Methods},
author = {Lorbach, Malte and Kyriakou, Elisavet I. and Poppe, Ronald and {van Dam}, Elsbeth A. and Noldus, Lucas P. J. J. and Veltkamp, Remco C.},
year = {2018},
pages = {166--172},
volume = {300}
}
- M. Lorbach, Automated Recognition of Rodent Social Behavior, Ph.D. dissertation, Utrecht University, Utrecht, The Netherlands, 2017.
Social behavior is an important aspect of rodent models in behavioral neuroscience. Abnormal social behavior can indicate the onset of conditions such as Huntington’s disease. Studying social behavior requires objective quantification of the occurrence of specific rodent interactions. While this can be done manually by annotating occurrences in videos, manual annotation is a time-consuming and sometimes subjective process: human observers need five to ten times the duration of the video to annotate it. We therefore aim to reduce the manual effort by automating the annotation process. Automated annotation involves a computational model that distinguishes between the different behaviors using visual information from the video such as the relative motion and pose of the rodents. Before the model can be applied, it is trained with labeled examples of every behavior.
Rodent social behavior classification is a challenging task. The classification method has to deal with highly unbalanced occurrence rates of the different behaviors, causing less frequent behaviors to be underrepresented. Furthermore, behavior categories sometimes leave room for interpretation which causes even human observers to disagree on specific occurrences. Similarly, the precise temporal extent of interactions is often ambiguous. Finally, tracking multiple, visually similar rodents is a demanding task, in particular during close-contact interactions where occlusion is frequent. We find that limited tracking quality inhibits the recognition of close-contact interactions.
Once a classification model is trained for a set of interactions, it can be applied to novel videos recorded in the same environment. We demonstrate that it can be difficult to comply with the requirements of a constant environment because they include not only controllable, external factors such as illumination and cage size, but also variations in the tested animal population. In a cross-dataset experiment we use juvenile and adult rats to show that behavior variations due to age can reduce recognition accuracy. We argue for adequate cross-dataset validation and more research into adaptation methods to deal with such variations systematically.
If no previous classification model is available, for example because behavior categories are changed or added, the human observer is left with manual annotation. We aim to reduce the effort in such scenarios by formulating the annotation task as an interactive labeling problem. The human starts annotating examples of interactions while the classifier learns to distinguish them. Once the classifier has learned sufficiently, it may take over the annotation and relieve the user of much of the work. To reduce the time further, we experiment with different strategies that guide the user to annotate particularly useful interaction examples. We demonstrate that placing the human in the annotation loop reduces the annotation time substantially compared to traditional, sequential labeling. Participants in a user study trained an accurate classifier in less than half an hour, which allowed the annotations to be propagated throughout the remaining two hours of video. This interactive annotation approach enables neuroscientists to analyze behavioral data more quickly than before and to study previous data in a new light with limited manual work.
@phdthesis{lorbach_automated_2017,
address = {Utrecht, The Netherlands},
type = {Ph.D. dissertation},
title = {Automated {{Recognition}} of {{Rodent Social Behavior}}},
url = {https://dspace.library.uu.nl/handle/1874/356090},
school = {Utrecht University},
author = {Lorbach, Malte},
year = {2017}
}
- M. Lorbach, R. Poppe, E. A. van Dam, L. P. J. J. Noldus, and R. C. Veltkamp, Transfer Learning for Rodent Behavior Recognition, in Proc. Conf. Measuring Behavior, 2016, pp. 461–469.
Most automated behavior recognition systems are trained and tested on a single dataset, which limits their application to comparable datasets. While training a new system for a new dataset is possible, it involves laborious annotation effort. We propose to reduce the annotation effort by reusing the knowledge obtained from previous datasets and adapting the recognition system to the novel data. To this end, we investigate the use of transfer learning in the context of rodent behavior recognition. Specifically, we look at two transfer learning methods and examine the implications of their respective assumptions on synthetic data. We further illustrate their performance in transferring a rat action classifier to a mouse action classifier. The results in this transfer task are promising: the classification accuracy improves substantially with only very few labeled examples from the novel dataset.
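As an illustration of the general idea (not one of the two methods studied in the paper), a simple instance-weighting baseline reuses the abundant source labels while emphasizing the few available target labels; all names here are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt(X_src, y_src, X_tgt, y_tgt, target_weight=10.0):
    # Pool source (e.g., rat) and the few labeled target (e.g., mouse) examples.
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    # Up-weight the scarce target examples so they steer the decision boundary.
    w = np.concatenate([np.ones(len(y_src)), np.full(len(y_tgt), target_weight)])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y, sample_weight=w)
    return clf
```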
@inproceedings{lorbach_transfer_2016,
title = {Transfer {{Learning}} for {{Rodent Behavior Recognition}}},
booktitle = {Proc. {{Conf}}. {{Measuring Behavior}}},
author = {Lorbach, Malte and Poppe, Ronald and {van Dam}, Elsbeth A. and Noldus, Lucas P. J. J. and Veltkamp, Remco C.},
year = {2016},
pages = {461--469}
}
- M. Lorbach, R. Poppe, E. A. van Dam, L. P. J. J. Noldus, and R. C. Veltkamp, Clustering-Based Active Learning in Unbalanced Rodent Behavior Data, in Proc. Visual Observation and Analysis of Vertebrate And Insect Behavior Workshop, 2016.
For the objective measurement of animal behavior from video, automated recognition systems are frequently employed. These systems rely on action models learned from labeled example videos. Manually labeling videos of animal behavior, however, is time-consuming and error-prone. We propose to reduce the labeling effort by selecting suitable training instances from the unlabeled corpus and learning the action models iteratively in interaction with the user. Due to the typical imbalance of behavior datasets, a random selection strategy would fail to sample enough minority class examples. To address the imbalance, we first find potential action prototypes by clustering the unlabeled data using a Dirichlet process Gaussian mixture model. We then sample instances from the prototypes and obtain a more balanced training set. We evaluate our system on two rat interaction datasets with different class priors and demonstrate a learning rate that is superior to the baseline.
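The prototype-finding step maps directly onto scikit-learn's Dirichlet process Gaussian mixture; a hedged sketch, with illustrative parameter values:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def prototype_sample(X, per_cluster=5, max_components=20, seed=0):
    # Fit a DP Gaussian mixture; unused components receive near-zero weight.
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,  # truncation level of the DP
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed).fit(X)
    labels = dpgmm.predict(X)
    rng = np.random.default_rng(seed)
    picks = []
    for k in np.unique(labels):       # iterate over the non-empty components
        members = np.flatnonzero(labels == k)
        picks.extend(rng.choice(members, size=min(per_cluster, len(members)),
                                replace=False))
    return np.asarray(picks)          # candidate instances to present to the annotator
```

Sampling a fixed number of instances per component, rather than uniformly at random, is what yields the more balanced training set for the minority behaviors.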
@inproceedings{lorbach_clustering-based_2016,
title = {Clustering-Based {{Active Learning}} in {{Unbalanced Rodent Behavior Data}}},
url = {http://homepages.inf.ed.ac.uk/rbf/VAIB16PAPERS/vaiblorbach.pdf},
booktitle = {Proc. {{Visual}} Observation and Analysis of {{Vertebrate And Insect Behavior Workshop}}},
author = {Lorbach, Malte and Poppe, Ronald and {van Dam}, Elsbeth A. and Noldus, Lucas P. J. J. and Veltkamp, Remco C.},
year = {2016}
}
- M. Lorbach, R. Poppe, E. A. van Dam, L. P. J. J. Noldus, and R. C. Veltkamp, Automated Recognition of Social Behavior in Rats: The Role of Feature Quality, in Proc. Conf. Image Analysis and Processing, 2015, pp. 565–574.
We investigate how video-based recognition of rat social behavior is affected by the quality of the tracking data and the derived feature set. We look at the impact of two common tracking errors: animal misidentification and inaccurate localization of body parts. We further examine how the complexity of representing the articulated body in the features influences the recognition accuracy. Our analyses show that correct identification of the rats is required to accurately recognize their interactions. Precise localization of multiple body points is beneficial for recognizing interactions that are described by a distinct pose. Including pose features leads to improvement only if the tracking algorithm can provide that data reliably.
@inproceedings{lorbach_automated_2015,
title = {Automated {{Recognition}} of {{Social Behavior}} in {{Rats}}: {{The Role}} of {{Feature Quality}}},
doi = {10.1007/978-3-319-23234-8_52},
booktitle = {Proc. {{Conf}}. {{Image Analysis}} and {{Processing}}},
author = {Lorbach, Malte and Poppe, Ronald and {van Dam}, Elsbeth A. and Noldus, Lucas P. J. J. and Veltkamp, Remco C.},
year = {2015},
pages = {565--574}
}
- M. Lorbach, S. Höfer, and O. Brock, Prior-Assisted Propagation of Spatial Information for Object Search, in Proc. Conf. Intelligent Robots and Systems (IROS), 2014, pp. 2904–2909.
We propose a novel method for object search in realistic environments. We formalize object search as a probabilistic inference problem over possible object locations. We make two contributions. First, we identify five priors, each capturing structure inherent to the physical world that is relevant to the search problem. Second, we propose a formalization of the object search problem that leverages these priors. Our formalization, in the form of a probabilistic graphical model, is capable of combining the various sources of information into a consistent probability distribution over object locations. The formalization allows us to sharpen the distribution by propagating the knowledge across locations. We employ the reasoning method to select actions of a searching robot in a simulated environment and show that it results in more efficient object search.
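A toy version of the fusion step, assuming each prior supplies a positive score per candidate location; treating the priors as independent evidence, their product (a sum in log space) yields one consistent distribution. The propagation of information across locations through the graphical model is omitted here, and the example priors are invented for illustration:

```python
import numpy as np

def fuse_priors(scores):
    """scores: (n_priors, n_locations) array of positive per-location scores."""
    log_post = np.log(scores).sum(axis=0)     # independent evidence combines as a product
    post = np.exp(log_post - log_post.max())  # subtract max for numerical stability
    return post / post.sum()                  # normalized distribution over locations

# e.g., a co-occurrence prior and a reachability prior over four locations
post = fuse_priors(np.array([[0.6, 0.2, 0.1, 0.1],
                             [0.3, 0.3, 0.3, 0.1]]))
next_location = int(np.argmax(post))          # search the most probable location first
```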
@inproceedings{lorbach_prior-assisted_2014,
title = {Prior-Assisted Propagation of Spatial Information for Object Search},
doi = {10.1109/IROS.2014.6942962},
booktitle = {Proc. {{Conf}}. {{Intelligent Robots}} and {{Systems}} ({{IROS}})},
author = {Lorbach, Malte and H{\"o}fer, Sebastian and Brock, Oliver},
year = {2014},
pages = {2904--2909}
}
- R. Martin-Martin, M. Lorbach, and O. Brock, Deterioration of Depth Measurements Due to Interference of Multiple RGB-D Sensors, in Proc. Conf. Intelligent Robots and Systems (IROS), 2014, pp. 4205–4212.
Depth sensors based on projected structured light have become standard in robotics research. However, when several of these sensors share the same workspace, the measurement quality can deteriorate significantly due to interference of the projected light patterns. We present a comprehensive study of this effect in Kinect and Xtion RGB-D sensors. In particular, we investigate measurement failure due to interference. Our experiments show that up to 95% of the depth measurements in the interference image region can disappear when two RGB-D sensors interfere with each other. We determine the severity of interference as a function of relative sensor placement and propose simple guidelines to reduce the impact of sensor interference. We show that these guidelines greatly increase the robustness of RGB-D-based SLAM.
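Structured-light sensors typically report failed measurements as zero depth, so the dropout caused by interference can be quantified as the fraction of zero-valued pixels in the affected region; a minimal sketch under that convention (common to Kinect/Xtion drivers):

```python
import numpy as np

def dropout_fraction(depth, region=None):
    """depth: 2-D array of depth values in mm; region: optional boolean mask."""
    patch = depth if region is None else depth[region]
    return np.count_nonzero(patch == 0) / patch.size

# Compare a frame captured alone against one captured while a second sensor
# projects into the same workspace, e.g.:
# dropout_fraction(depth_interfered, mask) - dropout_fraction(depth_alone, mask)
```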
@inproceedings{martin-martin_deterioration_2014,
title = {Deterioration of Depth Measurements Due to Interference of Multiple {{RGB}}-{{D}} Sensors},
doi = {10.1109/IROS.2014.6943155},
booktitle = {Proc. {{Conf}}. {{Intelligent Robots}} and {{Systems}} ({{IROS}})},
author = {Martin-Martin, Roberto and Lorbach, Malte and Brock, Oliver},
year = {2014},
pages = {4205--4212}
}