\section{Method}
\label{sec:design}
\subsection{System Architecture}
We designed a thermal-box to collect the data. It carries four Grideye sensors
at the corners of a 10 cm square and a Lepton 3 at the center. Our method
consists of three parts. In the first part, we train an SRCNN model with the
fused Grideye image as the low-resolution input and the downscaled Lepton 3
image as the high-resolution target. In the second part, we use the
super-resolution image to train a neural network that recognizes whether the
current pose is lying on the back or lying on the side. In the third part,
because noise and the residual heat left on the bed after a turn make it hard
to determine the current pose from a single frame, we remove the noise with a
median filter and determine the current pose from the trend of the probability
output by the recognition network.
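
As a concrete illustration of the first part, the following is a minimal
sketch of an SRCNN-style network in PyTorch. The single-channel $64 \times 64$
input size, the common 9-1-5 layer configuration, the MSE loss, and the Adam
settings are illustrative assumptions, not necessarily the exact
hyper-parameters used here.

\begin{verbatim}
# Minimal SRCNN-style network (sketch). Layer sizes follow the common
# 9-1-5 SRCNN configuration; they are assumptions, not the exact settings.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Training-step sketch: fused Grideye frame as the low-resolution input,
# downscaled Lepton 3 frame as the high-resolution target.
model = SRCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
criterion = nn.MSELoss()

def train_step(grideye_lr, lepton_hr):
    # grideye_lr, lepton_hr: tensors of shape (N, 1, 64, 64)
    optimizer.zero_grad()
    loss = criterion(model(grideye_lr), lepton_hr)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}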
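
For the third part, one way to read the pose off the trend of the recognition
network's output is sketched below. We assume here that the median filter is
applied to the sequence of per-frame probabilities; the window length, the
margin, and the simple slope-based trend rule are illustrative assumptions
rather than the exact decision rule.

\begin{verbatim}
# Sketch of the pose decision from the per-frame probability of "lying on
# back". Window size, margin, and the slope rule are illustrative choices.
import numpy as np
from scipy.signal import medfilt

def current_pose(prob_on_back, window=9, margin=0.1):
    # prob_on_back: 1-D array of per-frame probabilities from the classifier
    smoothed = medfilt(prob_on_back, kernel_size=window)  # suppress spikes
    recent = smoothed[-window:]
    trend = np.polyfit(np.arange(len(recent)), recent, 1)[0]  # recent slope
    level = recent[-1]
    if level > 0.5 + margin or (trend > 0 and level > 0.5):
        return "lying on back"
    if level < 0.5 - margin or (trend < 0 and level < 0.5):
        return "lying on side"
    return "uncertain"
\end{verbatim}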
\subsection{Grideye Data Fusion}
The thermal-box carries four Grideye sensors. At the beginning, we point the
thermal-box at an empty bed and record the background temperature, which is
subtracted from all following frames. After that, we resize the four
$8 \times 8$ Grideye images to $64 \times 64$ by bilinear interpolation and
then merge them according to the distance between the thermal-box and the bed,
the width of the sensor square, and the FOV of the Grideye sensor:
\begin{enumerate}
\item $D_b$ is the distance between the bed and the thermal-box.
\item $D_s$ is the width of the sensor square, i.e., the distance between adjacent sensors.
\item $F$ is the FOV of the Grideye sensor, which is about 60 degrees.
\item $Overlap = 64 - 64 \times \frac{D_s}{2 \times D_b \times \tan(\frac{F}{2})}$
\end{enumerate}
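
A sketch of the whole fusion step under these definitions is given below. It
assumes that adjacent upscaled images are shifted against each other by
$64 - Overlap$ pixels along each axis and averaged where they overlap, and it
uses a placeholder value for $D_b$, which depends on the installation.

\begin{verbatim}
# Sketch of Grideye background subtraction, upscaling, and fusion.
# D_B is a placeholder installation distance; the 10 cm sensor square and
# the roughly 60 degree FOV follow the text.
import numpy as np
import cv2

D_B = 0.5               # distance bed <-> thermal-box in metres (placeholder)
D_S = 0.10              # width of the sensor square in metres
FOV = np.deg2rad(60.0)  # Grideye field of view

def fuse(frames_8x8, background_8x8):
    # frames_8x8, background_8x8: four 8x8 temperature grids each, ordered
    # as (top-left, top-right, bottom-left, bottom-right)
    overlap = 64 - 64 * D_S / (2 * D_B * np.tan(FOV / 2))
    shift = int(round(64 - overlap))   # pixel offset between adjacent sensors
    size = 64 + shift
    acc = np.zeros((size, size), dtype=np.float32)
    cnt = np.zeros((size, size), dtype=np.float32)
    offsets = [(0, 0), (0, shift), (shift, 0), (shift, shift)]
    for frame, bg, (dy, dx) in zip(frames_8x8, background_8x8, offsets):
        img = np.asarray(frame, np.float32) - np.asarray(bg, np.float32)
        img = cv2.resize(img, (64, 64), interpolation=cv2.INTER_LINEAR)
        acc[dy:dy + 64, dx:dx + 64] += img
        cnt[dy:dy + 64, dx:dx + 64] += 1.0
    return acc / np.maximum(cnt, 1.0)  # average in the overlapping regions
\end{verbatim}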