From e1c7bfe8c01df7542186e46838ead36a1963068f Mon Sep 17 00:00:00 2001
From: Hobe
Date: Mon, 8 Jun 2020 04:43:04 +0000
Subject: [PATCH] Update design.

git-svn-id: http://newslabx.csie.ntu.edu.tw/svn/Ginger@74 5747cdd2-2146-426f-b2b0-0570f90b98ed
---
 trunk/RTCSA_SS/03Design.tex | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/trunk/RTCSA_SS/03Design.tex b/trunk/RTCSA_SS/03Design.tex
index 04d3299..859f076 100644
--- a/trunk/RTCSA_SS/03Design.tex
+++ b/trunk/RTCSA_SS/03Design.tex
@@ -1,17 +1,17 @@
-\section{Method name}
+\section{System Architecture}
\label{sec:design}
-\subsection{System Architecture}

We designed a thermal-box to collect the data. It has four Grideye sensors on the
corners of a 10 cm square and a Lepton 3 at the center. Figure~\ref{fig:method}
shows the overall system, which consists of four parts. The first part fuses the
-Grideye image, since the resolution of single Grideye sensor only has 64 pixels. The
+data from the four Grideye sensors into one low-resolution image, since the resolution
+of a single Grideye sensor is too low to make a decision on its own. In the
second part, we train an SRCNN model with the fused Grideye image as the low-resolution
input and the downscaled Lepton 3 image as the high-resolution target. In the third part, we use
the super-resolution images to train a neural network model that recognizes whether the current pose
-is lay on back or lay on side. The last part, because of noise and the residual
-heat on bed after turn over, it is difficult to figure out the current pose. We
+is lay on back or lay on side. The last part reduces the noise and the effect caused by
+the residual heat left on the bed after turning over. We
remove the noise with a median filter and determine the current pose according to the
trend of the probability output of the recognition network.

@@ -30,7 +30,8 @@ the thermal-box faces to an empty bed and records the background temperature.
This background temperature is subtracted from all the following frames. After that, we
resize the four $8 \times 8$ Grideye images to $64 \times 64$ by bilinear interpolation
and then merge them depending on the distance between the thermal-box and the
-bed, width of sensor square and the FOV of Grideye sensor.
+bed, the distance between the sensors, and the FOV of the Grideye sensor. In our case, $D_b$ is
+150 cm and $D_s$ is 10 cm.

\begin{enumerate}
\item $D_b$ is the distance between the bed and the thermal-box.
@@ -39,27 +40,28 @@ bed, width of sensor square and the FOV of Grideye sensor.
\item $Overlap = 64 - 64 \times \frac{D_s}{2 \times D_b \times \tan(\frac{F}{2})}$
\end{enumerate}

-\subsection{Pose determination}
+\subsection{Turning-Over Determination}

-We train a SRCNN model by the fused Grideye data and downscaled Lepton 3 image,
-and use it to upscale all following frames to SR frames. We labeled some SR frames
-to two categories. One is lay on back and the other is lay on side. Since the input
+We train an SRCNN model on pairs of fused Grideye images and downscaled Lepton 3 images,
+and use it to enhance all following Grideye frames into SR frames. We label some SR frames
+with one of two categories, lay on back or lay on side. Since the input
data is very small, we use a neural network consisting of one 2D convolution layer, one
2D max-pooling layer, one flatten layer, and one densely-connected layer.
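+As an illustration, such a classifier could be sketched as follows with a Keras-style
+API; the $64 \times 64$ single-channel input, the filter count, and the single sigmoid
+output are assumptions for illustration rather than the exact configuration of our network.
+\begin{verbatim}
+# Minimal sketch of the pose-recognition network described above:
+# one 2D convolution, one 2D max pooling, a flatten, and a dense output.
+# Input size and hyper-parameters are illustrative assumptions.
+import tensorflow as tf
+
+def build_pose_classifier(input_shape=(64, 64, 1)):
+    model = tf.keras.Sequential([
+        tf.keras.layers.Conv2D(16, (3, 3), activation="relu",
+                               input_shape=input_shape),
+        tf.keras.layers.MaxPooling2D((2, 2)),
+        tf.keras.layers.Flatten(),
+        # One sigmoid unit: probability that the frame shows lay on back.
+        tf.keras.layers.Dense(1, activation="sigmoid"),
+    ])
+    model.compile(optimizer="adam", loss="binary_crossentropy",
+                  metrics=["accuracy"])
+    return model
+\end{verbatim}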
The probability
-output has a very large various just after turning over because the model cannot
+output of this network varies greatly just after a turn over, because the model cannot
distinguish the residual heat left on the bed from the person, as shown in
Figure~\ref{fig:residual_heat}. This effect slowly disappears after one or two minutes.

To determine the pose, we first use a median filter with a window size of five
to filter out the noise. Then, we find the hull curves that follow the upper bound and
-lower bound of the data. Finally, calculate the middle line of upper bound and lower bound.
-Figure~\ref{fig:trend} shows the data and these lines.
+lower bound of the data. Finally, we calculate the middle line between the upper and
+the lower bound and regard it as the trend of the pose change. Figure~\ref{fig:trend}
+shows the filtered data and these lines.

We divide the data into 10-second time windows. If the middle line of a time window
-is at the top one fifth, it is a lay on back. If it is at the bottom one fifth,
-it is a lay on side. If the trend of line is going up, it is lay on back. Otherwise, it
-is lay on side. To guarantee the confidence of result, we will only trust the pose if
-there are three continuously same output.
+is in the top one fifth, or the trend is going up, the window is classified as lay on back.
+If it is in the bottom one fifth, or the trend is going down, it is classified as lay on side.
+If three consecutive windows have the same pose, and this pose differs from the pose recorded
+at the last turning over, we count it as another turning over.
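+As an illustration, this procedure could be sketched as follows; the envelope construction
+from local extrema, the top one-fifth threshold on the $[0,1]$ probability range, and the
+one-frame-per-second sampling rate are assumptions for illustration rather than the exact
+implementation.
+\begin{verbatim}
+# Sketch of the turning-over determination: median filter, upper/lower
+# envelopes, middle line, 10-second windows, and the three-window rule.
+# Thresholds, sampling rate, and envelope construction are assumptions.
+import numpy as np
+from scipy.signal import medfilt
+
+def pose_per_window(prob, fps=1.0, window_s=10):
+    # prob: per-frame probability of lay on back from the recognition network.
+    smooth = medfilt(np.asarray(prob, dtype=float), kernel_size=5)
+
+    # Upper and lower envelopes from local extrema, then their middle line.
+    idx = np.arange(len(smooth))
+    interior = idx[1:-1]
+    peaks = interior[(smooth[1:-1] >= smooth[:-2]) & (smooth[1:-1] >= smooth[2:])]
+    dips = interior[(smooth[1:-1] <= smooth[:-2]) & (smooth[1:-1] <= smooth[2:])]
+    upper = np.interp(idx, peaks, smooth[peaks]) if len(peaks) else smooth
+    lower = np.interp(idx, dips, smooth[dips]) if len(dips) else smooth
+    middle = (upper + lower) / 2.0
+
+    # Classify each window by the level and trend of the middle line.
+    poses, step = [], max(1, int(window_s * fps))
+    for start in range(0, len(middle) - step + 1, step):
+        seg = middle[start:start + step]
+        rising = seg[-1] > seg[0]
+        if seg.mean() >= 0.8 or rising:   # top one fifth, or rising trend
+            poses.append("back")
+        else:                             # bottom one fifth, or falling trend
+            poses.append("side")
+    return poses
+
+def count_turning_overs(poses):
+    # Count a turning over when three consecutive windows agree on a pose
+    # that differs from the pose recorded at the previous turning over.
+    count, last = 0, None
+    for i in range(len(poses) - 2):
+        if poses[i] == poses[i + 1] == poses[i + 2] and poses[i] != last:
+            count += 1
+            last = poses[i]
+    return count
+\end{verbatim}

\begin{figure}[ht]
\centering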