近期研究 Recent Research
Self-supervised Guided Modality Disentangled Representation Learning for Multimodal Sentiment Analysis and Schizophrenia Assessment
Chronic mental disorders, such as schizophrenia and depression, have a significant impact on individuals' well-being. Multimodal sentiment analysis has emerged as a promising approach to improving the diagnosis and treatment of these disorders. This thesis presents a novel learning framework that effectively processes multimodal inputs from patients for sentiment analysis, aiming to alleviate the burden on healthcare practitioners. Our approach leverages disentangled representation learning, incorporating consistency and disparity constraints to address modality heterogeneity. We also introduce a self-supervised learning approach to guide modality-specific representation learning and prevent the acquisition of meaningless features. Additionally, we propose a text-centric fusion module that combines the disentangled representations into a comprehensive multimodal representation. We evaluate our model on three publicly available benchmark datasets for multimodal sentiment analysis and a privately collected dataset focusing on schizophrenia counseling. The experimental results demonstrate state-of-the-art performance across various metrics on the benchmark datasets, surpassing related works. Furthermore, our learning algorithm shows promising performance in real-world applications, outperforming our previous work and achieving significant progress in schizophrenia assessment.
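To make the consistency and disparity constraints concrete, the following is a minimal PyTorch sketch of how such losses are commonly written for disentangled modality representations; the pairwise MSE consistency term, the soft orthogonality penalty, and names such as `shared_reps` and `private_rep` are illustrative assumptions rather than the exact formulation used in the thesis.

```python
import torch
import torch.nn.functional as F

def consistency_loss(shared_reps):
    """Pull the modality-invariant encodings of each modality together.

    shared_reps: list of (batch, dim) tensors, one per modality
    (e.g., text / audio / video). A simple pairwise MSE is used here as a
    stand-in for whatever similarity constraint the thesis adopts.
    """
    loss, n_pairs = 0.0, 0
    for i in range(len(shared_reps)):
        for j in range(i + 1, len(shared_reps)):
            loss = loss + F.mse_loss(shared_reps[i], shared_reps[j])
            n_pairs += 1
    return loss / max(n_pairs, 1)

def disparity_loss(shared_rep, private_rep):
    """Push the shared and modality-specific encodings of one modality
    apart via a soft orthogonality penalty (squared entries of their
    cross-correlation matrix)."""
    shared = F.normalize(shared_rep, dim=1)
    private = F.normalize(private_rep, dim=1)
    return (shared.t() @ private).pow(2).mean()
```

In training, auxiliary terms of this kind would typically be added to the main sentiment prediction loss with small weighting coefficients.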
Self-Knowledge Distillation and Re-weighted Average Pooling for Person Re-identification
In general, Re-ID is challenged by background clutter, occlusion, different camera viewpoints, and multiple identities with similar appearances. These factors hinder the extraction of robust and discriminative representations. We propose KD-Net, a self-knowledge distillation framework that distills knowledge of inter-class relationships within the network itself. Knowledge learned in the deeper portion of the network is transmitted to the shallower layers by the proposed knowledge receiver (KR). We also integrate a spatial non-local attention (SNLA) mechanism into the network to aggregate semantically similar pixels in the spatial domain; with the aid of SNLA, long-range dependencies in the feature maps can be captured. To tackle background clutter, we propose re-weighted average pooling (RAP), which takes advantage of both average pooling and max pooling. RAP enlarges the difference in response values between salient points and unimportant regions while aggregating the salient pixels. Experiments show that the proposed method outperforms state-of-the-art methods.
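As an illustration of how average and max pooling can be blended, here is a hedged PyTorch sketch of a re-weighted average pooling layer; the softmax-over-saliency weighting and the `temperature` parameter are assumptions for illustration, not the exact weighting rule of RAP as defined in the thesis.

```python
import torch
import torch.nn as nn

class ReweightedAvgPool(nn.Module):
    """Illustrative re-weighted average pooling: spatial positions with
    stronger responses get larger weights, so the result sits between
    plain average pooling (uniform weights) and max pooling (one-hot
    weights). `temperature` controls how peaked the weighting is."""

    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the backbone
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)
        # saliency of each spatial location = mean response over channels
        saliency = flat.mean(dim=1)                       # (B, HW)
        weights = torch.softmax(saliency / self.temperature, dim=-1)
        # weighted average over spatial positions
        return torch.einsum("bcn,bn->bc", flat, weights)  # (B, C)
```

A low temperature pushes the layer toward max-pooling behavior, while a high temperature recovers ordinary average pooling.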
Mental Disorder Detection for Schizophrenia Patients via Deep Visual Perceptron
Schizophrenia is a mental illness that progressively changes a person's mental state and causes serious social problems. People with schizophrenia often have difficulty expressing their actual thoughts, and their behaviors often differ from those of healthy individuals. In this work, our main goal is to provide an assessment for schizophrenia patients by detecting mood-related mental disorders during counseling, since a patient's mood is often unstable. Because mood-related disorders are highly correlated with emotion and depression, we can naturally employ techniques of emotion recognition and depression estimation via visual perception to infer the mental states of patients, thereby realizing a mental disorder detection system. Our system consists of two phases: learning and detection. In the learning phase, we propose a multi-task learning framework that learns a robust model and overcomes the limitations of conventional facial expression recognition systems in inferring a person's emotional status and depressive level. In the detection phase, we first employ the learned multi-task model to infer the mental state of the patient, and then, following observations about human emotion from cognitive science and the nature of schizophrenia, design an algorithm to detect mood-related disorders, including mania and depression.
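The multi-task idea can be pictured with a small PyTorch sketch: a shared visual encoder feeding an emotion-classification head and a depression-regression head, trained with a joint loss. The ResNet-18 backbone, head dimensions, and the loss weight `alpha` are hypothetical choices, not the configuration reported in this work.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EmotionDepressionNet(nn.Module):
    """Hypothetical multi-task model: a shared backbone with one head for
    categorical emotion and one for a continuous depression score."""

    def __init__(self, num_emotions: int = 7):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.emotion_head = nn.Linear(512, num_emotions)   # classification
        self.depression_head = nn.Linear(512, 1)           # regression

    def forward(self, face: torch.Tensor):
        feat = self.encoder(face).flatten(1)               # (B, 512)
        return self.emotion_head(feat), self.depression_head(feat)

# Joint objective: cross-entropy + weighted MSE (the weight alpha is an assumption).
def multitask_loss(emo_logits, emo_labels, dep_pred, dep_target, alpha=0.5):
    ce = nn.functional.cross_entropy(emo_logits, emo_labels)
    mse = nn.functional.mse_loss(dep_pred.squeeze(-1), dep_target)
    return ce + alpha * mse
```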
Temporal Pyramid Networks with Enhanced Relation Mechanism for Online Action Detection
Online action detection aims to identify actions as soon as each video frame arrives from a streaming video. An input video sequence contains not only frames of the action of interest but also background (non-action) and other irrelevant frames, which cause the network to learn less discriminative features. This thesis explores an Enhanced Relation Layer (ERL) embedded in a Temporal Convolution Network (TCN), which updates the features according to their relevance to the action of interest and their actionness. Relevant features should be considered essential and irrelevant features inessential. ERL assigns each time step a relevance score, indicating its relevance to the action of interest, and an actionness score, indicating the probability that an action occurs. These scores guide the network to focus on the more essential features and learn a more discriminative representation for identifying the action happening at the current time step. The temporal information of an input sequence is learned by the TCN. The output feature of each TCN layer has a different receptive field and thus focuses on a different temporal scale; however, lower-level features are semantically weak. We therefore design a Temporal Pyramid Network with a top-down architecture to propagate strong semantics from higher levels to lower levels, building a multi-temporal-scale feature sequence to identify actions of different temporal lengths. In the experiments, we apply our method to two benchmark datasets, THUMOS-14 and TVSeries. Our method achieves superior performance compared with baseline networks and promising results compared with state-of-the-art works.
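A rough PyTorch sketch of the ERL idea, re-weighting each time step by its relevance and actionness scores, is given below; the sigmoid scoring heads and the residual gating are illustrative assumptions and do not reproduce the exact layer in the thesis.

```python
import torch
import torch.nn as nn

class EnhancedRelationLayer(nn.Module):
    """Illustrative ERL-style module: every time step gets a relevance
    score and an actionness score, and the features are re-weighted by
    those scores before the next temporal-convolution stage."""

    def __init__(self, dim: int):
        super().__init__()
        self.relevance_head = nn.Linear(dim, 1)
        self.actionness_head = nn.Linear(dim, 1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, T, D) temporal features from a TCN block
        relevance = torch.sigmoid(self.relevance_head(feats))    # (B, T, 1)
        actionness = torch.sigmoid(self.actionness_head(feats))  # (B, T, 1)
        gate = relevance * actionness
        # keep a residual path so irrelevant steps are suppressed, not erased
        updated = feats * gate + feats
        return updated, relevance.squeeze(-1), actionness.squeeze(-1)
```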
Person Re-Identification Robust to Illumination Change with Clustering-based Loss Function
Recognizing person identities from different viewpoints is a challenging problem, since the human body may look completely different from each view. In addition, the locations and levels of illumination can differ among cameras in practical applications. In this paper, we propose an Illumination-Invariant Feature (IIF), extracted with the help of synthetic data, to assist model training. After analyzing the recent literature on person re-identification, we also found that most current studies apply a metric loss function to optimize the model, using an appropriate threshold to distinguish positive samples from negative ones. Although a metric loss function can perform such a distinction, it is notorious for its high complexity, which makes the training process lengthy. To tackle the time complexity caused by the metric loss function, we propose a clustering-based loss function. In the experiments, the proposed method outperforms state-of-the-art methods on person re-identification.
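To show why a clustering-based loss is cheaper than pairwise metric losses, here is an illustrative PyTorch sketch that compares each embedding against per-identity centroids instead of against all other samples; the margin formulation and names are assumptions for illustration, not the thesis's exact loss.

```python
import torch
import torch.nn.functional as F

def clustering_loss(features, labels, margin: float = 0.3):
    """Illustrative clustering-based loss: each embedding is pulled toward
    its identity centroid and pushed away from the nearest other centroid.
    Assumes the batch contains at least two identities."""
    ids = labels.unique()
    centroids = torch.stack([features[labels == i].mean(dim=0) for i in ids])  # (K, D)
    # distance from every sample to every centroid
    dists = torch.cdist(features, centroids)                  # (N, K)
    pos_mask = labels.unsqueeze(1) == ids.unsqueeze(0)        # (N, K)
    pos_dist = dists[pos_mask]                                # (N,) own-centroid distance
    neg_dist = dists.masked_fill(pos_mask, float("inf")).min(dim=1).values
    return F.relu(pos_dist - neg_dist + margin).mean()
```

Because the comparison is against the K identity centroids in the batch rather than against all sample pairs, the cost grows with the number of identities instead of quadratically with the number of samples.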
Partially Transferred Convolution Neural Network with Cross-Layer Inheriting for Posture Recognition from Top-view Depth Camera
We propose a new method for human posture recognition from top-view depth maps on small training datasets. Two strategies are developed to leverage the capability of a convolution neural network (CNN) in mining fundamental and generic features for recognition. First, the early layers of a CNN serve to extract generic features without task-specific representation; by applying the concept of transfer learning, the first few layers of the pre-trained VGG model can therefore be used directly without further fine-tuning. Second, to alleviate the computational load and increase the accuracy of our partially transferred model, a cross-layer inheriting feature fusion (CLIFF) is proposed, which feeds information from the early layers into the fully connected layer without further processing. The experimental results show that the combination of the partially transferred model and CLIFF provides better performance than a VGG16 model with re-trained FC layers and hand-crafted features such as RBPs.
Contents:
1. Human posture recognition from top-view depth images (sit, stand, bend, lie, squat, …)
2. Only part of the CNN parameters are transferred from the original VGG16 model
3. A cross-layer inheriting feature fusion (CLIFF) is proposed to gain more information from the early layers (see the sketch after this list)
4. Fewer layers need to be trained and the network is smaller, yet it still yields a 5-6% accuracy improvement over VGG16 and other hand-crafted features
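Below is a hedged PyTorch sketch of the overall idea: freeze the first VGG16 convolution blocks, retrain the later ones, and concatenate a pooled copy of the inherited early-layer features into the classifier. The split point (`features[:10]`), the pooling, and the head size are illustrative assumptions rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PartialVGGWithCLIFF(nn.Module):
    """Sketch of a partially transferred VGG16 with cross-layer inheriting
    feature fusion: early blocks come frozen from the pre-trained model,
    and a pooled early-layer feature is concatenated into the classifier."""

    def __init__(self, num_postures: int = 5):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1")
        self.early = vgg.features[:10]    # first conv blocks, transferred as-is
        for p in self.early.parameters():
            p.requires_grad = False       # used directly, no fine-tuning
        self.later = vgg.features[10:]    # remaining blocks, trained on depth maps
        self.pool_early = nn.AdaptiveAvgPool2d(1)
        self.pool_late = nn.AdaptiveAvgPool2d(1)
        # classifier sees late features (512) plus inherited early features (128)
        self.fc = nn.Linear(512 + 128, num_postures)

    def forward(self, depth_map: torch.Tensor):
        # depth maps are assumed to be replicated to 3 channels to fit VGG input
        e = self.early(depth_map)                      # early, generic features
        l = self.later(e)                              # task-specific features
        fused = torch.cat([self.pool_late(l).flatten(1),
                           self.pool_early(e).flatten(1)], dim=1)
        return self.fc(fused)
```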
Daily Activity Detection System Using Top-View Depth Camera for Smart Home Environment
We propose a novel indoor daily activity detection system that automatically keeps a log of users' daily life. The hardware setup adopts top-view depth cameras, which makes our system less privacy-sensitive. Moreover, in contrast with the traditional setting of side-view or surveillance-view RGB cameras, our camera setup avoids the problems caused by illumination change.
The goal of action detection is to identify where and when the actions of interest happen in a video stream. In this work, we regard the image sequence of an action as a set of key-poses arranged in a certain temporal order. To model an action, we use the latent SVM framework to jointly learn the appearance and the temporal locations of the key-poses.
We use recall-precision curves and average precision (AP) to validate the proposed daily activity detection system, and the experimental results show the accuracy and robustness of our system.
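For reference, a latent-SVM-style scoring function of this kind can be sketched in a few lines of Python: the score of a window is the sum, over key-poses, of the best appearance response minus a penalty for deviating from that key-pose's expected temporal position. The quadratic deformation penalty and all variable names below are illustrative assumptions, not the exact model learned in this work.

```python
import numpy as np

def score_action(frame_feats, keypose_templates, anchors, sigma=2.0):
    """Score a window of frames against an action model made of key-poses.

    frame_feats:        (T, D) per-frame appearance features
    keypose_templates:  (K, D) learned key-pose appearance weights
    anchors:            (K,)  expected relative positions in [0, 1]
    """
    T = frame_feats.shape[0]
    positions = np.arange(T) / max(T - 1, 1)
    total = 0.0
    for w_k, a_k in zip(keypose_templates, anchors):
        appearance = frame_feats @ w_k                       # (T,) response per frame
        deformation = ((positions - a_k) ** 2) / (sigma ** 2)  # drift penalty
        total += np.max(appearance - deformation)            # latent placement
    return total
```

Detection then amounts to sliding this scoring function over the stream and thresholding the window scores.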
Journal
- C. Y. Chuang, Y. T. Lin, C. C. Liu, L. E. Lee, H. Y. Chang, A. S. Liu, S. H. Hung, L. C. Fu, "Multimodal Assessment of Schizophrenia Symptom Severity from Linguistic, Acoustic and Visual Cues," IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023.
- B. J. Lin, Y. T. Lin, C. C. Liu, L. E. Lee, C. Y. Chuang, A. S. Liu, S. H. Hung, L. C. Fu, "Mental status detection for schizophrenia patients via deep visual perception," IEEE Journal of Biomedical and Health Informatics, 2022.
- C. M. Huang and L. C. Fu, "Multi-target visual tracking based effective surveillance with cooperation of multiple active cameras," IEEE Transactions on Systems, Man and Cybernetics - Part B: Cybernetics, vol. 41, no. 1, pp. 234-247, 2010.
- C. M. Huang, D. Liu, and L. C. Fu, "Visual tracking in cluttered environments using the visual probabilistic data association filter," IEEE Transactions on Robotics, vol. 22, pp. 1292-1297, 2006.
Conference
- T. H. Yeh, C. Kuo, A. S. Liu, Y. H. Liu, Y. H. Yang, Z. J. Li, J. T. Shen, L. C. Fu, "ResFlow: Multi-tasking of sequentially pooling spatiotemporal features for action recognition and optical flow estimation," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
- Y. H. Yang, A. S. Liu, Y. H. Liu, T. H. Yeh, Z. J. Li, L. C. Fu, "Cross-View Action Recognition Using View-Invariant Pose Feature Learned from Synthetic Data with Domain Adaptation," Asian Conference on Computer Vision (ACCV), 2018.
- Z. J. Li, Y. H. Liu, A. S. Liu, Y. H. Yang, T. H. Yeh, L. C. Fu, "Temporal-contrastive appearance network for facial expression recognition," IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2018.
- A. S. Liu, Z. J. Li, T. H. Yeh, Y. H. Yang, L. C. Fu, "Partially Transferred Convolution Neural Network with Cross-Layer Inheriting for Posture Recognition from Top-view Depth Camera," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.
- T. W. Hsu, Y. H. Yang, T. H. Yeh, A. S. Liu, L.C. Fu, Y. C. Zeng, "Privacy free indoor action detection system using top-view depth camera based on key-poses," IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016.
- S.-C. Lin, A.-S. Liu, T.-W. Hsu, L.-C. Fu, "Representative Body Points on Top-View Depth Sequences for Daily Activity Recognition," IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2015.
- T. E. Tseng, A. S. Liu, P. H. Hsiao, C. M. Huang, and L. C. Fu, "Real-Time People Detection and Tracking for Indoor Surveillance Using Multiple Top-View Depth Cameras," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014.
- B. J. Chen, C. M. Huang, A. S. Liu, T. E. Tseng, and L. C. Fu, "Hands Tracking with Self-occlusion Handling in Cluttered Environment," 9th Asian Control Conference (ASCC), 2013.
- B. J. Chen, C. M. Huang, T. E. Tseng, and L. C. Fu, "Robust Head and Hands Tracking with Occlusion Handling for Human Machine Interaction," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.
- Y. R. Chen, C. M. Huang and L. C. Fu, "Visual tracking of human head and arms with a single camera," in Proc. IEEE Int. Conf. Intelligent Robots and Systems, pp. 3416-3421, 2010.
- Y. R. Chen, C. M. Huang and L. C. Fu, "Upper body tracking for human-machine interaction with a moving camera," in Proc. IEEE Int. Conf. Intelligent Robots and Systems, pp. 1917-1922, 2009.
- Y. T. Lin, C. M. Huang, Y. R. Chen, and L. C. Fu, "Real-time face tracking and pose estimation with partitioned sampling and relevance vector machine," in Proc. IEEE Int. Conf. Robotics and Automation, pp. 453-458, 2009.
- C. M. Huang, Y. R. Chen and L. C. Fu, "Real-time object detection and tracking on a moving camera platform," in Proc. ICCAS-SICE International Joint Conference, pp. 717-722, 2009.
- C. J. Song, C. M. Huang and L. C. Fu, "Human tracking by importance sampling particle filtering on omnidirectional camera platform," in Proc. 17th Int. Federation of Automatic Control, pp. 6496-6501, 2008.
- C. M. Huang, Y. T. Lin and L. C. Fu, "Effective visual surveillance with cooperation of multiple active cameras," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, pp. 2718-2723, 2008.
- C. M. Huang, C. W. Lai and L. C. Fu, "Real-time multitarget visual tracking with an active camera," in Proc. IEEE Int. Conf. Intelligent Robots and Systems, pp. 2741-2746, 2007.
- Chuan-Wen Lai; Cheng-Ming Huang; Li-Chen Fu, "Multitarget visual tracking by Markov chain Monte Carlo based particle filtering with occlusion handling," International Conference on Advanced Robotics (ICAR), 2007.
- Chuan-Wen Lai; Cheng-Ming Huang; Li-Chen Fu, "Multi-targets tracking using separated importance sampling particle filters with joint image likelihood," IEEE International Conference on Systems, Man, and Cybernetics, Vol. 6, pp. 5179-5184, 2006.
- Yu-Shan Cheng; Cheng-Ming Huang; Li-Chen Fu, "Multiple people visual tracking in a multi-camera system for cluttered environments," IEEE International Conference on Intelligent Robots and Systems, pp. 675-680, 2006.
- Cheng-Ming Huang; Chuan-Wen Lai; Li-Chen Fu, "Visual tracking with probabilistic data association filter based on the circular Hough transform," IEEE International Conference on Robotics and Automation, pp. 4094-4099, 2006.
- Cheng-Ming Huang; Jong-Hann Jean; Yu-Shan Cheng; Li-Chen Fu, "Visual tracking and servoing system design for circling a target of an air vehicle simulated in virtual reality," IEEE International Conference on Intelligent Robots and Systems, pp. 2393–2398, 2005.
- Cheng-Ming Huang; Su-Chiun Wang; Chin-Fu Chang; Chin-I Huang; Yu-Shan Cheng; Li-Chen Fu, "An air combat simulator in the virtual reality with the visual tracking system and force-feedback components," IEEE International Conference on Control Applications, vol. 1, pp. 515-520, 2004.
- Pei-Ying Chen; Cheng-Ming Huang; Li-Chen Fu, "A robust visual servo system for tracking an arbitrary-shaped object by a new active contour method," American Control Conference, vol. 2, pp. 1516-1521, 2004.
- Cheng-Ming Huang; Su-Chiun Wang; Li-Chen Fu; Pei-Ying Chen; Yu-Shan Cheng, "A robust visual tracking of an arbitrary-shaped object by a new active contour method for a virtual reality application," IEEE International Conference on Networking, Sensing and Control, vol. 1, pp. 64-69, 2004.
- Teng-Kai Kuo; Cheng-Ming Huang; Li-Chen Fu; Pei-Ying Chen, "A robust servo based headtracker with auto-zooming in cluttered environment," American Control Conference, vol. 4, pp. 3107-3112, 2003.
- Ten-Kai Kuo; Li-Chen Fu; Jong-Hann Jean; Pei-Ying Chen; Yu-Ming Chan, "Zoom-based head tracker in complex environment," International Conference on Control Applications, vol. 2, pp. 725-730, 2002.
- En-Wei Huang; Wei-Guan Yau; Li-Chen Fu, "An edge based visual tracking for target within complex environment," American Control Conference, vol. 3, pp. 1993-1997, 2000.
Master Thesis
- 張鑫揚Hsin-Yang Chang, "用於多模態情感分析和思覺失調症評估的自監督引導之模態解耦表徵學習Self-supervised Guided Modality Disentangled Representation Learning for Multimodal Sentiment Analysis and Schizophrenia Assessment," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2023.
- 宋體淮Ti-Huai Song, "以運動與記憶增強之網路用於線上即時之時空間動作偵測Motion & Memory-Augmented Network for Online Real-Time Spatio-Temporal Action Detection," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2023.
- 吳承軒Cheng-Hsuan Wu, "透過域解耦暨輔以源域引導取樣之對比學習於無監督領域自適應之行人重識別系統Domain Disentanglement and Contrastive learning with Source-Guided Sampling for Unsupervised Domain Adaptation Person Re-identification," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2022.
- 周靈風Lin-Feng Zhou, "基於時序多模態資料用於疼痛强度辨識的混合深度神經網路Hybrid Deep Neural Networks for Pain Intensity Estimation via Temporal Multimodalities," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2021.
- 沈睿庭 Jui-Ting Shen, "具有增強關聯機制用於線上動作檢測之時序金字塔網路Temporal pyramid networks with enhanced relation mechanism for online action detection," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2021.
- 郭權 Chuan Kuo, "基於語義的人像解析及注意機制之強化式行人再識別系統Enhanced person re-identification based on semantic human parsing with attention mechanism," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2020.
- 劉宇閎 Yu-Hung Liu, "對光線變化具有強健適應的人物重新識別系統輔以基於群聚的損失函數Person re-identification robust to illumination change with clustering-based loss function," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2019.
- 黎子駿 Zi-Jun Li, "使用深度時域對比網絡之人臉情緒辨識Deep temporal-contrastive network for facial expression recognition," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2018.
- 葉佐新 Tso-Hsin Yeh, "以全域性空間時間特徵輔以序列式生成機制完成多工之動作辨識及產生光流影像Using global spatiotemporal features with sequentially pooling mechanism for multi-tasking of action recognition and optical flow estimation," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2018.
- 楊侑寰 Yu-Huan Yang, "使用合成資料搭配領域適應學習無關視角姿勢特徵進行跨視角動作辨識Cross-view action recognition using view-invariant pose feature learned from synthetic data with domain adaptation," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2018.
- 許唐瑋 Tang-Wei Hsu, "使用俯視深度攝影機之智慧家庭日常動作偵測系統Daily Activity Detection System Using Top-View Depth Camera for Smart Home Environment," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2016.
- 林叔君 Shu-Chun Lin, "利用分層俯視角深度特徵應用於日常活動辨識Daily Activity Recognition Using Features from Layered Top-View Depth Information, " Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2015.
- 曾廷恩 Ting-En Tseng, "利用多台俯視之深度相機進行即時人型偵測與追蹤之大型室內監視系統 Real-time People Detection and Tracking for Large Indoor Surveillance Using Multiple Top-view Depth Cameras, " Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2013.
- 陳柏錚 Bor-Jeng Chen, "在複雜背景下具自遮蔽處理之雙手追蹤系統Hands tracking with Self-occlusion Handling in Cluttered Environment, "Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2012.
- 杜明翰 Ming-Han Tu, "運用於單相機之三維人體手臂追蹤系統 Three-dimensional Human Arms Online Tracking with a Single Camera, " Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2011.
- 陳又生 Yu-Sheng Chen, "多相機影像監控系統之高效率多目標物一致性標籤 Efficient consistent labeling in visual surveillance system with multiple cameras," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2009.
- 陳羿如 Yi-Ru Chen, "人體上半身姿態追蹤系統應用於移動式平台之人機互動 Human robot interaction with motion platform, " Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2008.
- 宋昭蓉 Chao-Jung Song, "循序權重取樣粒子濾波法實現全方位相機之多目標物影像追蹤 Human Tracking Using Sequential Importance Sampling Particle Filter by Omnidirectional Camera," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2007.
- 賴傳文 Chuan-Wen Lai, "具遮蔽處理之貝氏濾波法實現主動相機平台之多目標物影像追蹤 Multi-Target Visual Tracking by Bayesian Filtering with Occlusion Handling on an Active Camera Platform," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2006.
- 鄭宇珊 Yu-Shan Cheng, "複雜背景下多相機之多目標物影像追蹤系統 Multiple People Visual Tracking in a Multi-Camera System for Cluttered Environment," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2005.
- 劉大元 Da-Yuan Liu, "雙軸相機平台在複雜環境下之即時影像追蹤 Real-Time Visual Tracking in Cluttered Environment with a Pan-Tilt Camera," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2004.
- 陳佩穎 Pei-Ying Chen, "以新型動態輪廓技術完成可追蹤任意形狀物體之強健影像伺服系統 A Robust Visual Servo System for Tracking an Arbitrary-Shaped Object by a New Active Contour Method," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2003.
- 郭騰凱 Teng-Kai Guo, "在複雜背景下以強健視覺伺服為基礎可自動調變倍率之頭部追蹤系統 A Robust Visual Servo Based Headtracker with Auto-Zooming in Cluttered Environment," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2002.
- 丘偉源 Wei-Yuan Qiu, "以真實飛行物體為目標之視覺伺服追蹤系統之設計與實務 Design and Implementation of Visual Servoing System for Realistic Air Target Tracking," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2000.
- 黃恩暐 En-Wei Huang, "以偵測目標物邊緣分佈為基礎之視覺追蹤系統 An Edge Based Visual Tracking for Target within Complex Environment,"Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 1999.
- 陳治宇 Zhi-Yu Chen, "以智慧型單眼即時影像追蹤系統與飛機姿態辨識為基礎之目標物五維軌跡檢測 5D Target Trajectory Detection via Intelligent Monocular Visual System in Real-Time with Air-Target Orientation Recognition," Master Thesis, Institute of Electrical Engineering, National Taiwan University, R.O.C., 1998.
Dissertation
- C. M. Huang (黃正民), "以貝氏濾波器及可動式相機進行之影像追蹤暨其應用Visual Tracking and Its Applications by Bayesian Filtering with Active Cameras," Dissertation, Institute of Electrical Engineering, National Taiwan University, R.O.C., 2009.
Patent
- Inventors: Li-Chen Fu, Ting-En Tseng, An-Sheng Liu, Po-Hao Hsiao
  Title: Human image tracking system, and human image detection and human image tracking methods thereof
  Invention Patent No.: US 9317765 B2; 2016/04/19.
- Inventors (發明人): 傅立成 (Li-Chen Fu), 曾廷恩 (Ting-En Tseng), 劉安陞 (An-Sheng Liu), 蕭伯豪 (Po-Hao Hsiao)
  Title (發明名稱): 人型影像追蹤系統及其人型影像偵測方法與追蹤方法 (Human image tracking system and human image detection and tracking methods thereof)
  Certificate No. (證書案號): R.O.C. Invention Patent No. I503756; 2015/10/11.