\section{Performance Evaluation} \label{sec:eval}

This section describes how we collect the dataset and presents the evaluation results for the proposed method. To train the SRCNN model, we ask a person to lie on a bed and randomly change poses or move their arms and legs. We collect about 600 images from the Grideye sensors and the Lepton 3 camera and align them by timestamp. Figure~\ref{fig:resolution_compare} shows the output of the SRCNN model.

\begin{figure}[tbp]
  \centering
  \subfloat[Grideye Image]{
    \includegraphics[width=0.3\columnwidth]{figures/LR.png}
  }
  \subfloat[SR Image]{
    \includegraphics[width=0.3\columnwidth]{figures/SR.png}
  }
  \subfloat[Downscaled Lepton Image]{
    \includegraphics[width=0.3\columnwidth]{figures/HR.png}
  }
  \caption{Result of the SRCNN model.}
  \label{fig:resolution_compare}
\end{figure}

To train the pose recognition model, we collect 200 images of lying on the back and 400 images of lying on the right or left side. The results show that SRCNN improves the accuracy of single-frame pose detection by about 5\%.

For the end-to-end evaluation, we ask a person to lie on the bed and change pose every minute, cycling through lying on the back, on the left side, on the back again, and on the right side. Our method outputs the current pose every 10 seconds and detects turning-over events. The pose detection accuracy is 65\%, and the turning-over detection achieves a recall of 50\% and a precision of 83\%.
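Here, recall and precision follow their standard definitions, where $TP$, $FP$, and $FN$ denote correctly detected, falsely reported, and missed turning-over events, respectively:
\begin{equation}
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}.
\end{equation}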