
Fix some typo.

git-svn-id: http://newslabx.csie.ntu.edu.tw/svn/Ginger@71 5747cdd2-2146-426f-b2b0-0570f90b98ed
Hobe committed 4 years ago · branch master · commit 5050e51e20
4 changed files with 9 additions and 9 deletions:
  1. trunk/RTCSA_SS/01Introduction.tex (+2, -2)
  2. trunk/RTCSA_SS/02Background.tex (+1, -1)
  3. trunk/RTCSA_SS/03Design.tex (+4, -4)
  4. trunk/RTCSA_SS/04Evaluation.tex (+2, -2)

trunk/RTCSA_SS/01Introduction.tex (+2, -2)

@@ -5,7 +5,7 @@ The turn over frequency while sleeping is an important index to quantify the
 health of elderly. Many wearable devices can also achieve the same purpose, but
 many study show that the elderly feel uncomfortable with wearing such devices
 all days.\textcolor{red}{(source?)}. By the low resolution thermal camera, we
-can obtain the daily activities informations, but not reveal too much privacy
+can obtain the daily activities information, but not reveal too much privacy
 like the RGB camera.
 {\bf Contribution}
@@ -16,7 +16,7 @@ Super-resolution techniques, and our method, we can have 50\% recall rate and 83
 on turning over detection.
-The remaining of this paper is organized as follow. Section~\ref{sec:bk_related}
+The remaining of this paper is organized as follows. Section~\ref{sec:bk_related}
 presents background for developing the methods. Section~\ref{sec:design} presents
 the system architecture, and the developed mechanisms. Section~\ref{sec:eval}
 presents the evaluation results of proposed mechanism and Section~\ref{sec:conclusion} summaries our works.


trunk/RTCSA_SS/02Background.tex (+1, -1)

@@ -26,7 +26,7 @@ patches.
 %% \end{figure}
 \subsection{Thermal cameras}
-In this work, we use two different resolution thermal camera to play the role
+In this work, we use two different resolution thermal cameras to play the role
 of low-resolution and high-resolution camera. For low-resolution camera, we use
 Grid-EYE thermal camera. Grid-EYE is a thermal camera that can output
 $8 \times 8$ pixels thermal data with $2.5^\circ C$ accuracy and $0.25^\circ C$


trunk/RTCSA_SS/03Design.tex (+4, -4)

@@ -5,7 +5,7 @@
 We designed a thermal-box to collect the data. It has four Grideye sensors on the
 corners of a 10 cm square and a Lepton 3 at the central. Figure~\ref{fig:method} shows
 the system of our method. It consists four parts. The first part is to fuse multiple
-Grideye image, since the resolution of singel Grideye sensor only has 64 pixels. The
+Grideye image, since the resolution of single Grideye sensor only has 64 pixels. The
 second part, we train the SRCNN model with fused Grideye image
 as low-resolution and downscaled Lepton 3 image as high-resolution image.
 The third part, we use the
@@ -29,7 +29,7 @@ On the thermal-box, there are four Grideye sensors. At the beginning, we let
 the thermal-box faces to an empty bed and records the background temperature.
 All the following frames will subtract this background temperature. After that,
 we resize four $8 \times 8$ Grideye images to $64 \times 64$ by bilinear
-interpolation and than merge them dependence on the distance between thermal-box and
+interpolation and then merge them dependence on the distance between thermal-box and
 bed, width of sensor square and the FOV of Grideye sensor.
 \begin{enumerate}
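
The fusion step in the hunk above reads as: subtract the recorded empty-bed background, upscale each 8x8 Grid-EYE frame to 64x64 by bilinear interpolation, then merge the four images according to the sensor geometry. Below is a minimal Python sketch of that pipeline, assuming NumPy and OpenCV are available; the per-sensor pixel offsets are taken as precomputed inputs because the exact geometry (box-to-bed distance, 10 cm sensor square, Grid-EYE FOV) is not spelled out in this hunk, and the names fuse_grideye, offsets and out_size are illustrative, not from the repository.

    # Sketch of the Grid-EYE fusion step: background subtraction, bilinear
    # upscaling from 8x8 to 64x64, and merging of the four sensor images.
    # The (dy, dx) offsets are assumed to be precomputed from the box-to-bed
    # distance, the 10 cm sensor square, and the Grid-EYE FOV, and must keep
    # each 64x64 patch inside the out_size x out_size canvas.
    import numpy as np
    import cv2

    def fuse_grideye(frames, backgrounds, offsets, out_size=96):
        """frames, backgrounds: four 8x8 arrays; offsets: four (dy, dx) pixel shifts."""
        acc = np.zeros((out_size, out_size), dtype=np.float32)
        cnt = np.zeros((out_size, out_size), dtype=np.float32)
        for frame, bg, (dy, dx) in zip(frames, backgrounds, offsets):
            # Remove the empty-bed background recorded at start-up.
            diff = np.asarray(frame, dtype=np.float32) - np.asarray(bg, dtype=np.float32)
            # Bilinear interpolation from 8x8 to 64x64.
            up = cv2.resize(diff, (64, 64), interpolation=cv2.INTER_LINEAR)
            # Place the upscaled image at its offset and average where patches overlap.
            acc[dy:dy + 64, dx:dx + 64] += up
            cnt[dy:dy + 64, dx:dx + 64] += 1.0
        return acc / np.maximum(cnt, 1.0)
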
@@ -51,8 +51,8 @@ distinguish the residual heat on bed and the person as Figure~\ref{fig:residual_
 situation will slowly disappear after one or two minutes.
 To determination the pose, first we use a median filter with a window size of five
-to filter out the noise. Than, find the curve hull line of the upper bound and
-lower bound of the data. Finally calculate the middle line of upper bound and lower bound.
+to filter out the noise. Then, find the curve hull line of the upper bound and
+lower bound of the data. Finally, calculate the middle line of upper bound and lower bound.
 Figure~\ref{fig:trend} shows the data and these lines.
 We divide every data into 10 second time windows. If the middle line of the time window
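
The pose-determination hunk above names three ingredients: a median filter with a window of five, upper and lower "curve hull" lines, and their middle line evaluated over 10-second windows. The sketch below, assuming SciPy, approximates the hull lines with running maximum/minimum envelopes; that approximation, the envelope width env_win, and the name pose_trend are assumptions for illustration, not the authors' exact construction.

    # Sketch of the pose-trend computation: median filter (window 5),
    # upper/lower envelope of the signal, and their middle line, averaged
    # over 10-second windows. The envelope is approximated with running
    # max/min filters; the paper's exact "curve hull line" may differ.
    import numpy as np
    from scipy.signal import medfilt
    from scipy.ndimage import maximum_filter1d, minimum_filter1d

    def pose_trend(signal, fs=1.0, env_win=15, window_sec=10):
        """signal: 1-D per-frame feature; fs: frames per second."""
        smoothed = medfilt(np.asarray(signal, dtype=float), kernel_size=5)
        upper = maximum_filter1d(smoothed, size=env_win)   # upper bound line
        lower = minimum_filter1d(smoothed, size=env_win)   # lower bound line
        middle = (upper + lower) / 2.0                     # middle line
        # Average the middle line over 10-second windows.
        step = max(int(window_sec * fs), 1)
        n = (len(middle) // step) * step
        return middle[:n].reshape(-1, step).mean(axis=1)
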


trunk/RTCSA_SS/04Evaluation.tex (+2, -2)

@@ -27,7 +27,7 @@ For training the pose recognition model, we collect 200 images of lay on back an
 of lay on right or left side. The result shows that the accuracy of single frame detection
 can be improved about 5\% by SRCNN.
-We let a person lay on bed and change his pose every minutes. The pose is repeating
-lay on back, lay on left, lay on back and lay on right. Our method will output the currect pose
+We let a person lay on bed and change his pose every minute. The pose is repeating
+lay on back, lay on left, lay on back and lay on right. Our method will output the current pose
 every 10 seconds and check if the pose changed. The accuracy of pose detection is 65\%, and the
 turning over detection has 50\% recall rate and 83\% precision.
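
As a reminder of what the reported 50% recall and 83% precision mean for turn-over events (these are the standard definitions, not taken from the diff), with TP, FP and FN the true-positive, false-positive and false-negative counts:

    \[ \text{recall} = \frac{TP}{TP + FN}, \qquad \text{precision} = \frac{TP}{TP + FP} \]
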
