
git-svn-id: http://newslabx.csie.ntu.edu.tw/svn/Ginger@65 5747cdd2-2146-426f-b2b0-0570f90b98ed

master
Hobe 5 years ago
parent
commit
64728f6cc7
10 changed files with 49 additions and 17 deletions
  1. +12 -13 trunk/RTCSA_SS/03Design.tex
  2. +27 -1 trunk/RTCSA_SS/04Evaluation.tex
  3. +7 -1 trunk/RTCSA_SS/05Conclusion.tex
  4. +2 -2 trunk/RTCSA_SS/Main.tex
  5. BIN trunk/RTCSA_SS/figures/HR.png
  6. BIN trunk/RTCSA_SS/figures/LR.png
  7. BIN trunk/RTCSA_SS/figures/Lepton_residual_heat.bmp
  8. BIN trunk/RTCSA_SS/figures/Lepton_residual_heat.png
  9. BIN trunk/RTCSA_SS/figures/SR.png
  10. +1 -0 trunk/RTCSA_SS/figures/coverart.eps

+12 -13 trunk/RTCSA_SS/03Design.tex

@ -36,13 +36,13 @@ to two categories: one is lying on the back and the other is lying on the side. Since the input
data is very small, we use a neural network consisting of one 2D convolution layer, one
2D max-pooling layer, one flatten layer, and one densely connected layer. The output probability
varies widely just after turning over, because the model cannot
distinguish the residual heat on the bed from the person, as shown in Figure~\ref{fig:pose}(a). This
distinguish the residual heat on the bed from the person, as shown in Figure~\ref{fig:residual_heat}. This
situation slowly disappears after one or two minutes.
To determine the pose, we first use a median filter with a window size of five
to filter out the noise. Then we find the convex-hull lines of the upper bound and
lower bound of the data, and finally calculate the middle line between the upper and lower bounds.
Figure~\ref{fig:pose}(b) and (c) show the data and these lines.
Figure~\ref{fig:trend} shows the data and these lines.
We divide the data into 10-second time windows. If the middle line of a time window
is in the top fifth, the pose is lying on the back; if it is in the bottom fifth,
@ -52,15 +52,14 @@ there are three consecutive identical outputs.
\begin{figure}[ht]
\centering
\subfloat[Residual heat on bed]{
\includegraphics[width=0.3\columnwidth]{figures/Lepton_residual_heat.bmp}
}
\subfloat[Enhanced Images after Background Calibration]{
\includegraphics[width=0.3\columnwidth]{figures/MinMax.pdf}
}
\subfloat[Enhanced Images after Background Calibration]{
\includegraphics[width=0.3\columnwidth]{figures/Mid.pdf}
}
\caption{Background subtraction.}
\label{fig:pose}
\minipage{0.3\columnwidth}
\includegraphics[width=\linewidth]{figures/Lepton_residual_heat.png}
\caption{Residual heat on bed.}
\label{fig:residual_heat}
\endminipage
\minipage{0.65\columnwidth}
\includegraphics[width=\linewidth]{figures/MinMax.pdf}
\caption{Trend of pose.}
\label{fig:trend}
\endminipage
\end{figure}
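The pose-determination rule in the hunk above (width-5 median filter, upper/lower bound lines, 10-second windows, top/bottom-fifth labels, and three consecutive identical outputs for a turn) can be sketched as follows. This is our own illustrative re-implementation, not the paper's code: all names are ours, and a per-window max/min pair stands in for the hull lines.

```python
import numpy as np

def classify_windows(trace, win=10):
    """Label each `win`-sample window of a 1-D position trace.

    Sketch of the rule in the text: denoise with a width-5 median
    filter, then place each window's middle line within the overall
    range; top fifth -> 'back', bottom fifth -> 'side'.
    """
    padded = np.pad(trace, 2, mode='edge')
    med = np.array([np.median(padded[i:i + 5]) for i in range(len(trace))])
    lo, hi = med.min(), med.max()
    labels = []
    for s in range(0, len(med) - win + 1, win):
        # Middle line between the window's upper and lower bounds.
        mid = (med[s:s + win].max() + med[s:s + win].min()) / 2
        frac = (mid - lo) / (hi - lo) if hi > lo else 0.5
        if frac >= 0.8:
            labels.append('back')
        elif frac <= 0.2:
            labels.append('side')
        else:
            labels.append('unknown')
    return labels

def detect_turnover(labels):
    """Report a turn when three consecutive windows agree on a new pose."""
    events, prev = [], None
    for i in range(len(labels) - 2):
        if labels[i] == labels[i + 1] == labels[i + 2] != 'unknown' \
                and labels[i] != prev:
            if prev is not None:
                events.append((i, prev, labels[i]))
            prev = labels[i]
    return events
```

The 0.8/0.2 thresholds encode the "top fifth / bottom fifth" rule; the `'unknown'` band between them is debounced away by requiring three matching windows before a turn is reported.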

+27 -1 trunk/RTCSA_SS/04Evaluation.tex

@ -1,4 +1,30 @@
\section{Performance Evaluation}
\label{sec:eval}
Evaluation.
This section presents the evaluation results for the proposed method and
describes how we collected the dataset.
To train the SRCNN model, we let a person lie on a bed and randomly change pose
or move his arms and legs. We collected about 600 images from the Grideye sensor and
the Lepton 3, and aligned them at the same timestamps. Figure~\ref{fig:resolution_compare} shows the result of the SRCNN model.
\begin{figure}[ht]
\centering
\subfloat[Grideye Image]{
\includegraphics[width=0.32\columnwidth]{figures/LR.png}
}
\subfloat[SR Image]{
\includegraphics[width=0.32\columnwidth]{figures/SR.png}
}
\subfloat[Downscaled Lepton Image]{
\includegraphics[width=0.32\columnwidth]{figures/HR.png}
}
\caption{Result of SRCNN.}
\label{fig:resolution_compare}
\end{figure}
To train the pose recognition model, we collected 200 images of lying on the back and 400 images
of lying on the right or left side. The results show that SRCNN improves the accuracy of
single-frame detection by about 5\%.
The accuracy of pose detection is about 67\%, and that of turning-over detection is 56\%.
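For reference, SRCNN's three-stage forward pass (pre-upscaling, then feature extraction, non-linear mapping, and reconstruction, with the standard 9-1-5 kernel sizes from Dong et al.) can be sketched in plain NumPy. The weights below are random placeholders and nearest-neighbour upsampling stands in for bicubic, so this only illustrates the data flow and shapes, not the trained model from the paper.

```python
import numpy as np

def upsample_nn(img, factor):
    """Nearest-neighbour upsampling, a stand-in for bicubic pre-upscaling."""
    return np.kron(img, np.ones((factor, factor)))

def conv2d(img, kernels):
    """'Same' 2-D convolution of an (H, W, Cin) map with (k, k, Cin, Cout) kernels."""
    k = kernels.shape[0]
    pad = k // 2
    H, W, _ = img.shape
    Cout = kernels.shape[3]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, Cout))
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k, :]
            out[y, x, :] = np.tensordot(patch, kernels,
                                        axes=([0, 1, 2], [0, 1, 2]))
    return out

def srcnn_forward(lr, factor, w1, w2, w3):
    """SRCNN: upscale, then 9x9 extraction -> 1x1 mapping -> 5x5 reconstruction."""
    x = upsample_nn(lr, factor)[..., None]   # (H, W, 1)
    x = np.maximum(conv2d(x, w1), 0)         # feature extraction + ReLU
    x = np.maximum(conv2d(x, w2), 0)         # non-linear mapping + ReLU
    return conv2d(x, w3)[..., 0]             # reconstruction
```

With an 8x8 Grid-EYE-sized input and an upscaling factor of 3, the output is a 24x24 super-resolved frame; channel widths (8 and 4 below) are arbitrary here, smaller than SRCNN's usual 64/32.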

+7 -1 trunk/RTCSA_SS/05Conclusion.tex

@ -1,3 +1,9 @@
\section{Conclusion\label{sec:conclusion}}
Conclusion
In this paper, we use SRCNN to improve the resolution of a thermal sensor and
detect the pose in each frame.
The results show that super-resolution can slightly improve the accuracy of pose
detection. We also develop a method to detect turning over; it achieves about 67\% accuracy
even when the accuracy of pose detection is only ???\%.

+2 -2 trunk/RTCSA_SS/Main.tex

@ -13,7 +13,6 @@
%\usepackage{ntu_techrpt_cover}
%\usepackage{lipsum}
\usepackage{graphicx}
\usepackage{times}
%\usepackage{psfrag}
%\usepackage[tight]{subfigure}
@ -22,7 +21,8 @@
%\usepackage{epsfig}
\usepackage{longtable}
%\usepackage{cases}
%\usepackage{subfig}
\usepackage{subfig}
\usepackage{graphicx}
\usepackage{balance}
\usepackage{xcolor}
%\usepackage{algorithm}


BIN trunk/RTCSA_SS/figures/HR.png
Width: 30 | Height: 40 | Size: 892 B

BIN trunk/RTCSA_SS/figures/LR.png
Width: 30 | Height: 40 | Size: 365 B

BIN trunk/RTCSA_SS/figures/Lepton_residual_heat.bmp

BIN trunk/RTCSA_SS/figures/Lepton_residual_heat.png
Width: 30 | Height: 40 | Size: 919 B

BIN trunk/RTCSA_SS/figures/SR.png
Width: 30 | Height: 39 | Size: 775 B

+1 -0 trunk/RTCSA_SS/figures/coverart.eps

@ -0,0 +1 @@
/Users/cshih/notes/tex_config/figures/coverart.eps
