Kazutaka Takahashi

Research Associate Assistant Professor, University of Chicago

Dr. Takahashi obtained his Ph.D. in Estimation and Control from MIT in 2007 and completed his postdoctoral training at the University of Chicago in 2012. His research aims to understand the dynamics of neural activity recorded from multiple sites within or across brain areas, and how those dynamics relate to complex or naturalistic behaviors such as reach-to-grasp or feeding. In particular, he is interested in 1) how populations of neurons exhibit spatiotemporal dynamics at the microcircuit-to-mesoscopic level; and 2) how such neural dynamics relate to naturalistic, unconstrained behavior in both control and pathological cases.

Past sessions

Summit 2018 Fiducial Marker Tracking Using Machine Vision

June 5, 2018 05:00 PM PT

Advanced machine vision is increasingly being used to investigate and diagnose complex health issues, track their progression, and identify potential remedies. In this study, a behavioral neuroscientist at the University of Chicago and his colleagues collaborated with Kavi Global to characterize 3D feeding behavior and how it changes under neurological conditions such as ALS, Parkinson's disease, and stroke, or under oral environmental changes such as tooth extraction and dental implants.

Videos of rodents feeding on kibble are recorded with a high-speed biplanar videofluoroscopy technique (XROMM). Their feeding behavior is then analyzed by tracking radio-opaque fiducial markers implanted in the head region. Until now, the marker tracking process was manual and tedious, and was not designed to handle massive amounts of longitudinal data. This session will highlight a near-automated, deep learning-based solution for detecting and tracking fiducial markers in the videos, yielding a more efficient and robust process with a more than 300-fold reduction in data processing time compared to manual use of the existing software.

Our approach involved the following steps: (i) marker detection: deep learning algorithms identify the pixels corresponding to markers within each frame; (ii) marker tracking: Kalman filtering combined with the Hungarian algorithm tracks markers across frames; (iii) 2D-to-3D conversion: the video sequences recorded by the two cameras are matched, and marker locations in 2D track coordinates are triangulated to generate 3D marker locations. The features extracted from the videos are then used to characterize behaviorally relevant kinematics such as rhythmic chewing or swallowing. The solution was built with the TensorFlow Python APIs and Spark.
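To make step (ii) concrete, here is a minimal sketch of Kalman filtering plus Hungarian assignment for frame-to-frame marker tracking. This is an illustrative reconstruction, not the published pipeline's code: the constant-velocity motion model, the noise parameters, and all names are assumptions, and SciPy's `linear_sum_assignment` stands in for the Hungarian algorithm.

```python
# Hypothetical sketch of marker tracking: a constant-velocity Kalman filter
# per marker, with Hungarian assignment matching predictions to detections.
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    """One marker track with state [x, y, vx, vy]."""
    F = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # constant-velocity dynamics
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # only (x, y) is observed
    Q = np.eye(4) * 1e-2                        # process noise (assumed)
    R = np.eye(2)                               # measurement noise (assumed)

    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0               # initial uncertainty (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.H @ self.x                  # predicted (x, y)

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(tracks, detections):
    """Match predicted track positions to detected markers (Hungarian)."""
    preds = np.array([t.predict() for t in tracks])
    cost = np.linalg.norm(preds[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)    # minimize total distance
    for r, c in zip(rows, cols):
        tracks[r].update(detections[c])
    return list(zip(rows, cols))

# Toy usage: two markers drifting right; detections arrive in arbitrary order.
tracks = [Track((0.0, 0.0)), Track((10.0, 0.0))]
for detections in [np.array([[1.0, 0.1], [11.0, -0.1]]),
                   np.array([[12.0, 0.0], [2.0, 0.0]])]:
    matches = associate(tracks, detections)
print(matches)  # track identities persist even though detections are shuffled
```

The assignment step is what keeps marker identities stable across frames: each track's predicted position is compared against all detections, and the globally cheapest matching wins, so two nearby markers are not swapped just because one detection happens to be listed first.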

Session hashtag: #AISAIS14