\section{Background and Related Works}
\label{sec:bk_related}
This section provides background on the super-resolution technique and on the
sensors used in this work. The related works subsection reviews prior studies
that use thermal images to recognize activity.
\subsection{Background}
\paragraph{Super-Resolution Convolutional Neural Network}
C. Dong et al.~\cite{ChaoDong16} proposed a deep learning method for single-image
super-resolution (SR). Their method learns a direct mapping from a low-resolution
image to its high-resolution counterpart, and they show that traditional
sparse-coding-based SR methods can be reformulated as a deep convolutional neural
network. SRCNN consists of three operations, illustrated in
Figure~\ref{fig:SRCNN_model}. The first layer extracts patches from the
low-resolution image. The second layer maps the low-resolution patch
representations to high-resolution ones. The third layer reconstructs the
high-resolution image from the high-resolution patches.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1\linewidth]{figures/SRCNN_model.pdf}
\caption{Illustration of the super-resolution convolutional neural network (SRCNN) model~\cite{ChaoDong16}.}
\label{fig:SRCNN_model}
\end{center}
\end{figure}
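As a rough illustration, the three operations above can be sketched as three successive convolutions. This is a minimal NumPy sketch with untrained random weights; the filter counts and input size are scaled down for brevity and are not the exact hyper-parameters of~\cite{ChaoDong16}.

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k), b is (C_out,)."""
    c_out, _, k, _ = w.shape
    h_out, w_out = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h_out, w_out))
    for o in range(c_out):
        for r in range(h_out):
            for c in range(w_out):
                out[o, r, c] = np.sum(x[:, r:r + k, c:c + k] * w[o]) + b[o]
    return out

def srcnn(y, p):
    # Operation 1: patch extraction and representation (9x9 filters + ReLU).
    f1 = np.maximum(conv2d(y, p["w1"], p["b1"]), 0)
    # Operation 2: non-linear mapping from LR to HR feature space (1x1 filters + ReLU).
    f2 = np.maximum(conv2d(f1, p["w2"], p["b2"]), 0)
    # Operation 3: reconstruction of the HR image (5x5 filters, linear).
    return conv2d(f2, p["w3"], p["b3"])

rng = np.random.default_rng(0)
p = {
    "w1": 0.01 * rng.standard_normal((8, 1, 9, 9)), "b1": np.zeros(8),
    "w2": 0.01 * rng.standard_normal((4, 8, 1, 1)), "b2": np.zeros(4),
    "w3": 0.01 * rng.standard_normal((1, 4, 5, 5)), "b3": np.zeros(1),
}
y = rng.random((1, 20, 20))  # bicubic-upscaled LR input, as SRCNN expects
sr = srcnn(y, p)
print(sr.shape)  # the valid convolutions shrink 20 -> 12 -> 12 -> 8
```

In the actual model the three layers are trained end-to-end with a mean-squared-error loss against ground-truth high-resolution images.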
\subsection{Thermal cameras}
In this work, we use two thermal cameras with different resolutions to play the
roles of the low-resolution and high-resolution cameras. For the low-resolution
camera, we use the Grid-EYE, which outputs $8 \times 8$-pixel thermal data with
$\pm 2.5^{\circ}$C accuracy and $0.25^{\circ}$C resolution at $10$ fps. For the
high-resolution camera, we use the Lepton 3. Their specifications are shown in
Table~\ref{table:specification of devices}.
\begin{table}[hb]
\centering
\footnotesize
\begin{tabular}{|p{3cm}<{\centering}|p{2cm}<{\centering}|p{2.5cm}<{\centering}|}
\hline
Specification & Grid-EYE & Lepton 3\\
\hline
Resolution & 64 pixels ($8 \times 8$) & $120 \times 160$\\
\hline
FOV-horizontal & $60^{\circ}$ & $49.5^{\circ}$\\
\hline
FOV-vertical & $60^{\circ}$ & $61.8^{\circ}$\\
\hline
Frame rate & 10\,Hz & 8.7\,Hz\\
\hline
Detectable temperature range & $-20^{\circ}$C to $80^{\circ}$C & $0^{\circ}$C to $120^{\circ}$C\\
\hline
Output format & Absolute temperature (Celsius) & 14-bit value relative to camera temperature\\
\hline
Temperature accuracy & $\pm2.5^{\circ}$C & -\\
\hline
Temperature resolution & $0.25^{\circ}$C & -\\
\hline
Bits per pixel & 12 & 14\\
\hline
Data rate & 7.68\,kbps & 2.34\,Mbps\\
\hline
\end{tabular}
\caption{Specifications of the Grid-EYE~\cite{grideye_datasheet} and Lepton 3~\cite{lepton_datasheet}.}
\label{table:specification of devices}
\end{table}
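The data-rate row of the table follows directly from the other rows: pixels per frame times bits per pixel times frame rate. A quick sketch of the arithmetic:

```python
# Data rate (bits per second) = pixels per frame * bits per pixel * frames per second.
def data_rate_bps(pixels, bits_per_pixel, fps):
    return pixels * bits_per_pixel * fps

grideye = data_rate_bps(8 * 8, 12, 10)        # 7680 bps  = 7.68 kbps
lepton3 = data_rate_bps(120 * 160, 14, 8.7)   # ~2338560 bps, about 2.34 Mbps
print(grideye / 1e3, lepton3 / 1e6)
```

The roughly 300-fold gap in data rate is one practical motivation for using the low-cost, low-bandwidth Grid-EYE as the deployed sensor.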
\subsection{Related Works}
X. Chen et al.~\cite{7805509} proposed to use a visible camera as guidance for super-resolving IR images, and built an IR-RGB multi-sensor imaging system. The approach is based on the fact that the RGB channels correlate with IR images to different degrees. To avoid transferring wrong textures, the method uses cross-correlation to find proper matches between regions of the IR image and the RGB image. It then applies sub-pixel estimation and guided filtering to remove noise and discontinuities in the IR image. At the last step of each iteration, a truncated quadric model serves as the cost function that decides when to terminate the algorithm. After the iterations, the method fine-tunes the output IR image with outlier detection to remove black points that appear only at the edges of the IR image.
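The cross-correlation matching step can be illustrated with zero-mean normalized cross-correlation (NCC). This toy sketch, on synthetic data, slides a small IR patch over one guidance channel and returns the best-scoring position; it is not the authors' implementation, only the matching idea.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(ir_patch, guide, k):
    """Slide ir_patch over the guide channel; return the top-left of the best NCC score."""
    h, w = guide.shape
    best_score, best_rc = -2.0, (0, 0)
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            s = ncc(ir_patch, guide[r:r + k, c:c + k])
            if s > best_score:
                best_score, best_rc = s, (r, c)
    return best_rc, best_score

rng = np.random.default_rng(1)
guide = rng.random((24, 24))          # one channel of the RGB guidance image
ir_patch = guide[5:13, 9:17].copy()   # an IR region that should align at (5, 9)
print(best_match(ir_patch, guide, 8))
```

NCC is insensitive to affine brightness changes, which is what makes it usable across the IR and RGB modalities even though their absolute intensities differ.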
% \begin{figure}[ht]
% \begin{center}
% \includegraphics[width=0.7\linewidth]{figures/color_guided_SR.pdf}
% \caption{Flow chart of IR-Color multi-sensor imaging system.}
% \label{fig:color_guided_SR}
% \end{center}
% \end{figure}
F. Almsari et al.~\cite{8713356} applied a Generative Adversarial Network (GAN), called {\it TIGAN}, to the thermal super-resolution problem. TIGAN consists of two networks, a generator and a discriminator, which learn from each other in a zero-sum game and try to reach an equilibrium state. The generator is trained to map low-resolution thermal images into the super-resolution thermal image domain, so that the generated images resemble their ground-truth high-resolution counterparts, while the discriminator is trained to distinguish generated images from ground-truth high-resolution images. Compared to the SRCNN model, TIGAN preserves high-frequency details and produces sharper textures.
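The zero-sum objective can be illustrated with the standard binary cross-entropy GAN losses. The discriminator scores below are made up for illustration, and the actual TIGAN loss may include additional terms beyond this adversarial part.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy for discriminator probabilities p in (0, 1)."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

# d_real: discriminator scores on ground-truth HR thermal images,
# d_fake: scores on generator outputs (hypothetical values).
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.2, 0.3, 0.1])

# Discriminator objective: push real scores toward 1 and fake scores toward 0.
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
# Generator objective: the opposing goal, push fake scores toward 1.
g_loss = bce(d_fake, 1.0)
print(d_loss, g_loss)
```

Because the two losses pull the fake scores in opposite directions, training seeks the equilibrium the text describes, where the discriminator can no longer tell generated images from real ones.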
% \begin{figure}[hb]
% \begin{center}
% \includegraphics[width=0.7\linewidth]{figures/GAN_SR.pdf}
% \caption{Network architecture of TIGAN.}
% \label{fig:GAN_SR}
% \end{center}
% \end{figure}
% % background
The works above assume thermal images with a resolution of at least $120 \times 160$, in which far more thermal features are available than in the $8 \times 8$ thermal images collected by Grid-EYE sensors. In addition, these works use RGB images as references to train their models. In this work, no RGB camera is used, so as to protect privacy.