
git-svn-id: http://newslabx.csie.ntu.edu.tw/svn/Ginger@5 5747cdd2-2146-426f-b2b0-0570f90b98ed

master
Hobe 7 years ago
parent
commit
a88fc6a586
8 changed files with 76 additions and 85 deletions
  1. +3
    -1
      trunk/01Introduction.tex
  2. +7
    -7
      trunk/02Background.tex
  3. +4
    -4
      trunk/03Design.tex
  4. +14
    -12
      trunk/04Evaluation.tex
  5. +11
    -11
      trunk/Main.aux
  6. +37
    -50
      trunk/Main.log
  7. BIN
      trunk/Main.pdf
  8. BIN
      trunk/Main.synctex.gz

+ 3
- 1
trunk/01Introduction.tex

@ -1,6 +1,8 @@
\section{Introduction}
\label{sec:introduction}
Walking exercises the nervous, cardiovascular, pulmonary, musculoskeletal and hematologic systems because it requires more oxygen to contract the muscles. Hence, {\it gait velocity}, also called {\it walking speed}~\cite{Middleton2015}, has become a valid and important metric for senior populations~\cite{Middleton2015,studenski2011,Studenski03}.
In 2011, Studenski et al.~\cite{studenski2011} published a study that tracked the gait velocity of over 34,000 seniors for 6 to 21 years in the US. The study found that survival predicted from age, sex, and gait velocity was as accurate as survival predicted from age, sex, chronic conditions, smoking history, blood pressure, body mass index, and hospitalization. This result motivated the industrial and academic communities to develop methods to track gait velocity and assess risk from it. The following years produced many papers pointing to the importance of gait velocity as a predictor of degradation and exacerbation events associated with various chronic diseases, including heart failure, COPD, kidney failure, and stroke~\cite{Studenski03, pulignano2016, Konthoraxjnl2015, kutner2015}.
@ -17,7 +19,7 @@ Shih and his colleagues~\cite{Shih17b} proposed a sensing system to be installed
\label{fig:gaitVelocitySmartHome}
\end{figure}
In a IoT environment, many devices will periodically transmit data. Some sensor is use for avoid accidents, so they will have very high sensing frequency. However, most of the data are redundant. Like a temperature sensor on a gas stove, the temperature value is the same as the value from air conditioner and does not change very frequently, but it will have dramatically difference when we are cooking. We can simply make a threshold that when temperature is higher or lower than some degrees, the data will be transmitted, and drop the data that we don't interest. This is a very easy solution if we only have a few devices, but when we have hundreds or thousands devices, it is impossible to manually configure all devices, and the setting may need to change in the winter and summer, or different location.
Many sensors are needed to obtain an accurate result. They periodically send sensing data and let the host server make the final decision. Sensors sample at high frequency so as not to miss unexpected events, so much of the data they produce is redundant. For simple sensors, e.g., a temperature sensor on a gas stove, an infrared sensor for light control, or a $CO_2$ sensor that warns of bad ventilation, we can set a threshold and only send readings that are higher or lower than it. This is an easy solution when there are only a few devices, but with hundreds or thousands of devices it is impossible to configure every device manually, and the setting may need to change between winter and summer or across locations. Moreover, when the difference between useful and redundant data is subtle, distinguishing them requires more computing power or extra information from nearby sensors.
In this paper, we study data from the Panasonic Grid-EYE, an $8 \times 8$ pixel infrared array sensor, and the FLIR ONE PRO, a $480 \times 640$ pixel thermal camera. Both are mounted on the ceiling and record video of a person walking under the camera.


+ 7
- 7
trunk/02Background.tex

@ -2,9 +2,9 @@
\label{sec:bk_related}
\subsection{Panasonic Grid-EYE Thermal Sensor}
First, we study the sensor Panasonic Grid-EYE which is a thermal camera that can output $8 \times 8$ pixels image with $2.5^\circ C$ accuracy and $0.25^\circ C$ resolution at $10$ frames per second. In normal mode, the current consumption is 4.5mA. It is a low resolution camera and infrared array sensor, so we install it in our house at ease without some privacy issue that may cause by a surveillance camera.
First, we study the Panasonic Grid-EYE, a thermal camera that outputs $8 \times 8$ pixel images with $2.5^\circ C$ accuracy and $0.25^\circ C$ resolution at $10$ frames per second. In normal mode, its current consumption is 4.5mA. Because it is a low-resolution infrared array sensor, we can install it in a house at ease, without the privacy issues a surveillance camera may cause.
When someone walks under a Grid-EYE sensor, we will see some pixels with higher temperature than others. Figure~\ref{fig:GridEye} shows an example of image from Grid-EYE sensor. The sensor value will look like a cone shape. The pixel with our head will have the highest temperature, body is lower, and leg is the lowest except background because when the distance from camera to our body is longer, the area cover by the camera will be wider and the ratio of background temperature in the pixel will increase, also our head does not cover by cloth, so the surface temperature will higher than other place. while we are walking in a area, the temperature of air in the area will become warmer, and the shape of human will be harder to recognize.
When someone walks under a Grid-EYE sensor, we see some pixels with higher temperature than others. Figure~\ref{fig:GridEye} shows an example image from the Grid-EYE sensor. The sensor values form a cone shape: the pixel containing the head has the highest temperature, the body is lower, and the legs are the lowest apart from the background. This is because the farther a body part is from the camera, the wider the area each pixel covers and the larger the share of background temperature in that pixel; also, the head is not covered by clothes, so its surface temperature is higher than elsewhere. While a person walks in an area, the air in the area becomes warmer, and the human shape becomes harder to recognize.
\begin{figure}[htbp]
\centering
@ -13,11 +13,11 @@ When someone walks under a Grid-EYE sensor, we will see some pixels with higher
\label{fig:GridEye}
\end{figure}
The data we used is from a solitary elder's home. We deployed four Grid-EYE sensors at the corner of her living room, and recorded the thermal video for three weeks at $10$ frames per second data rate.
The data we used is from a solitary elder's home. We deployed four Grid-EYE sensors at the corners of her living room and recorded thermal video for three weeks at a $10$ frames per second data rate; the raw data is about 17.6GB.
\subsection{FLIR ONE PRO}
FLIR ONE PRO can output a $480 \times 640$ pixels image with $3^\circ C$ accuracy and $0.01^\circ C$ resolution, and capture video at about 5 FPS. In picture taking mode, it can retrieve the precise data from the header of picture file. However, in the video taking mode, it only store a gray scale video and show the range of temperature on the monitor. Hence, we use $^\circ C$ in picture mode, and gray scale value as the unit to analyze error rate. Since FLIR ONE PRO can offer a image with about $5000$ times number of pixels compare to Grid-EYE. It cannot simply use a Gaussian function to fit it. Hence, we developed a method to compress FLIR images. It can also treat as a normal image and be stored as jpeg, png, etc.
FLIR ONE PRO is a thermal camera that outputs $480 \times 640$ pixel images with $3^\circ C$ accuracy and $0.01^\circ C$ resolution, and captures video at about 5 frames per second. In picture-taking mode, the precise data can be retrieved from the header of the picture file. In video-taking mode, however, it only stores a gray-scale video and shows the temperature range on the monitor. Hence, we use the data from picture-taking mode as our test object. The data from FLIR ONE PRO has about $5000$ times the resolution of Grid-EYE, so the shape of an object is no longer just a cone: temperatures within one object are similar, but there are obvious edges between different objects. Hence, we developed a method to compress FLIR images. A FLIR image can also be treated as a normal image and stored as JPEG, PNG, etc.
\subsection{Raspberry Pi 3}
@ -25,14 +25,14 @@ We use Raspberry Pi 3 as our testing environment. It has a 1.2 GHz 64-bit quad-c
\subsection{Simple Data Compressing}
If we save a frame in a readable format, it will take about 380 bytes storage. However, the temperature range of our scenario mostly from $5^\circ C$ to $40^\circ C$ and the resolution is $0.25^\circ C$, so we can easily represent each temperature by one byte. Hence, we only need $64$ bytes to store a frame. We have try several ways to compress the frame.
If we store a frame from Grid-EYE in a readable format, it takes about 380 bytes of storage. However, the temperature range of an indoor environment is mostly from $5^\circ C$ to $40^\circ C$ and the resolution of Grid-EYE is $0.25^\circ C$, so we can easily represent each temperature with one byte. Hence, we only need $64$ bytes to store a frame. We tried several ways to compress the frame.
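The one-byte-per-pixel packing above can be sketched as follows. This is our own illustration, not the paper's code; the frame values are synthetic, and clamping out-of-range readings to $[5, 40]$ is an assumption we add:

```python
# 5..40 C at 0.25 C resolution gives (40 - 5) / 0.25 = 140 levels, so each
# Grid-EYE temperature fits in a single byte and a frame fits in 64 bytes.
T_MIN, T_MAX, STEP = 5.0, 40.0, 0.25

def encode_frame(frame):
    """Map each of the 64 Grid-EYE temperatures to one byte (clamped)."""
    return bytes(int(round((min(max(t, T_MIN), T_MAX) - T_MIN) / STEP))
                 for t in frame)

def decode_frame(data):
    """Inverse mapping: byte back to a temperature on the 0.25 C grid."""
    return [T_MIN + b * STEP for b in data]

frame = [21.25] * 60 + [28.5, 29.0, 30.25, 27.75]   # synthetic 8x8 frame
data = encode_frame(frame)
assert len(data) == 64 and decode_frame(data) == frame
```

Round-tripping is exact because the readings already lie on the $0.25^\circ C$ grid.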
\subsubsection{Huffman Coding}
Huffman coding is a lossless compression method. On average, it reduces the frame size from $64$ bytes to $40.7$ bytes with a $6$-byte standard deviation.
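As an illustrative sketch (not the authors' implementation), a minimal Huffman coder over the one-byte pixel values could look like this; the frame contents are synthetic and the tie-breaking rule is our own choice:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman prefix code; returns {symbol: bitstring}."""
    # Heap entries are (frequency, tiebreaker, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair. The tiebreaker keeps tuple
    # comparison away from the trees themselves.
    heap = [(f, i, s) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol frame
        return {heap[0][2]: "0"}
    i = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (t1, t2)))
        i += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

# Synthetic 64-byte frame: mostly background plus a few warm pixels.
frame = bytes([65] * 50 + [90, 92, 95] * 4 + [100, 101])
codes = huffman_codes(frame)
bits = sum(len(codes[b]) for b in frame)
print((bits + 7) // 8)   # compressed payload in bytes, code table not counted
```

A skewed frame like this compresses well because the dominant background symbol gets a one-bit code.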
\subsubsection{Z-score Threshold}
We can only transmit the pixels with higher temperature since thermal sensors are mostly used for detect heat source. Z-score is defined as $z = \frac{\chi - \mu}{\sigma}$, where $\chi$ is the value of the temperature, $\mu$ is the average of the temperature and $\sigma$ is the standard deviation of the temperature. In our earlier work~\cite{Shih17b}, we use Z-score instead of a static threshold to detect human because the background temperature may have a $10^\circ C$ difference between day and night, and when people walk through the sensing area the Grid-EYE, the temperature reading will only increase $2^\circ C$ to $3^\circ C$. Hence, it is impossible to use a static threshold to detect human. In~\cite{Shih17b}, we only use the pixels with the Z-score higher than $2$, so we can reduce the frame size from $64$ bytes to $12.6$ bytes with $2.9$ bytes standard deviation by Z-score threshold $2$ and compress by Huffman coding.
We can send only the pixels with higher temperature, since thermal sensors are mostly used to detect heat sources. The Z-score is defined as $z = \frac{\chi - \mu}{\sigma}$, where $\chi$ is the temperature value, $\mu$ is the mean temperature and $\sigma$ is the standard deviation of the temperature. In our earlier work~\cite{Shih17b}, we used the Z-score instead of a static threshold to detect humans because the background temperature may differ by $10^\circ C$ between day and night, while a person walking through the Grid-EYE's sensing area raises the reading by only $2^\circ C$ to $3^\circ C$; a static threshold therefore cannot detect humans reliably. In~\cite{Shih17b}, a pixel carries useful data only if its Z-score is higher than $2$, so we can shrink the frame by dropping all pixels with a Z-score below $2$. With a Z-score threshold of $2$ followed by Huffman coding, the frame size drops from $64$ bytes to $12.6$ bytes with a $2.9$-byte standard deviation.
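A hedged sketch of this per-frame filter (not the code from~\cite{Shih17b}; the frame is synthetic and the use of the population standard deviation is our assumption):

```python
import statistics

def zscore_filter(frame, threshold=2.0):
    """Keep (index, temperature) only for pixels whose Z-score exceeds threshold."""
    mu = statistics.mean(frame)
    sigma = statistics.pstdev(frame)
    if sigma == 0:                 # flat frame: nothing stands out
        return []
    return [(i, t) for i, t in enumerate(frame) if (t - mu) / sigma > threshold]

# Synthetic frame: background around 21 C, a person near pixels 27-28.
frame = [21.0] * 64
frame[27], frame[28] = 24.0, 23.5
hot = zscore_filter(frame)
print([i for i, _ in hot])   # → [27, 28]
```

The same $2$--$3^\circ C$ bump survives the filter whether the background sits at $15^\circ C$ or $25^\circ C$, which is exactly why the Z-score beats a static threshold here.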
\subsubsection{Gaussian Function Fitting}
Since the shape of human in a thermal image looks like a cone, we may use a Gaussian function to fit the image. A Gaussian function $y = Ae^{-(x-B)^2/2C^2}$ has three parameter $A$, $B$ and $C$. The parameter $A$ is the height of the cone, $B$ is the position of the cone's peak and $C$ controls the width of the cone. We let the pixel with highest temperature be the peak of the cone, so we only need to adjust $A$ and $C$ to fit the image. Guo~\cite{guo2011simple} offer a fast way to get the fitting Gaussian function. In our testing, it will be about $0.5^\circ C$ root-mean-square error, and only needs $5$ bytes to store the position of peak and two parameters.
Since the shape of a human in a Grid-EYE image looks like a cone, we can fit the image with a Gaussian function. A Gaussian function $y = Ae^{-(x-B)^2/2C^2}$ has three parameters $A$, $B$ and $C$: $A$ is the height of the cone, $B$ is the position of the cone's peak, and $C$ controls the width of the cone. We let the pixel with the highest temperature be the peak of the cone, so we only need to adjust $A$ and $C$ to fit the image. Guo~\cite{guo2011simple} offers a fast way to obtain the fitting Gaussian function. In our tests, this gives about $0.5^\circ C$ root-mean-square error and needs only $5$ bytes to store the peak position and the two parameters.
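A minimal sketch of the idea behind this fit: with the peak position $B$ fixed, $\ln y = \ln A - (x-B)^2/2C^2$ is linear in $\ln A$ and $1/2C^2$, so an ordinary linear regression of $\ln y$ against $d=(x-B)^2$ recovers $A$ and $C$. (Guo's method additionally weights the regression by $y^2$ to tame noise near zero; that refinement is omitted here, and the data below is synthetic.)

```python
import math

def fit_gaussian_1d(xs, ys, peak_x):
    """Fit ln(y) = ln(A) - (x - peak_x)^2 / (2 C^2) by least squares."""
    pts = [((x - peak_x) ** 2, math.log(y)) for x, y in zip(xs, ys) if y > 0]
    n = len(pts)
    sd = sum(d for d, _ in pts)
    sl = sum(l for _, l in pts)
    sdd = sum(d * d for d, _ in pts)
    sdl = sum(d * l for d, l in pts)
    b = (n * sdl - sd * sl) / (n * sdd - sd * sd)    # slope = -1/(2 C^2)
    a = (sl - b * sd) / n                            # intercept = ln A
    return math.exp(a), math.sqrt(-1.0 / (2.0 * b))  # A, C

# Recover parameters from an exact synthetic profile (A=3, B=4, C=1.5).
xs = list(range(8))
ys = [3.0 * math.exp(-((x - 4) ** 2) / (2 * 1.5 ** 2)) for x in xs]
A, C = fit_gaussian_1d(xs, ys, peak_x=4)
print(round(A, 3), round(C, 3))   # → 3.0 1.5
```

Storing the peak index plus the fitted $A$ and $C$ is what makes the $5$-byte representation possible.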

+ 4
- 4
trunk/03Design.tex

@ -1,7 +1,7 @@
\section{Data Size Decision Framework}
\label{sec:design}
This section presents the proposed method to outcome a data array than have less size compare to jpeg image when we can tolerate some error of data. We use the image captured by FLIR ONE PRO. In a thermal image, the temperature variation between nearby pixels are very small except the edge of objects. Hence, we can separate an image into several regions, and the pixels in a same region has similar value so we can use the average value to represent it and do not cause too much error. However, precisely separate an image into some polygon region takes a lot of computation time and hard to describe the edge of each region. Also, decide the number of region also a problem. Hence, to effectively describe regions we design that every region most be a rectangle, and every region can only separate into 4 regions by cut in half at the middle of horizontal and vertical. The image will start from only contains one region, and 3 regions will be added per round since we cut a region into 4 pieces.
This section presents the proposed method, which compresses an image to a smaller size than a JPEG image when some data error can be tolerated. We use images captured by FLIR ONE PRO. Temperatures within one object are similar, but there are obvious edges between different objects. Hence, we can separate an image into several regions; the pixels in a region have similar values, so we can represent the whole region by its average value without causing too much error. However, precisely separating an image into polygonal regions is very difficult and needs a lot of computation power, and deciding the number of regions is also a problem. Hence, to describe regions efficiently, we require every region to be a rectangle, and a region can only be separated into 4 regions by cutting it in half horizontally and vertically at its middle. The image starts as a single region, and each round adds 3 regions, since one region is cut into 4 pieces.
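The splitting rule can be sketched in a few lines (our own illustration; which region is split each round is decided by the score introduced later in the section, so the choice below is arbitrary):

```python
def split(region):
    """Cut a rectangular region (x, y, w, h) in half along both axes."""
    x, y, w, h = region
    w2, h2 = w // 2, h // 2
    return [(x, y, w2, h2), (x + w2, y, w - w2, h2),
            (x, y + h2, w2, h - h2), (x + w2, y + h2, w - w2, h - h2)]

regions = [(0, 0, 480, 640)]       # start: one region covering the image
for _ in range(6):                 # 6 separations -> 1 + 6 * 3 = 19 regions
    regions = regions[1:] + split(regions[0])   # placeholder choice of region
print(len(regions))   # → 19
```

Six separations yielding 19 regions matches the example discussed below in this section.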
Our method is shown in Figure~\ref{fig:SystemArchitecture}. Data structure initialization only needs to be done once as long as the image size does not change. A thermal image is loaded into our data structure and separated into several regions. Finally, the output data is encoded by Huffman coding and transmitted to the database. When users want to use the image, they can get the encoded data from the database.
@ -24,7 +24,7 @@ For each frame, we can use a context-free language to represent it.
\end{center}
$R$ denotes a region of the image; it can either be represented by the average $\alpha$ of its pixels, or be separated into four regions, leaving a remainder $\beta$. Depending on the output size we desire, we can choose the number of region separations.
The context-free grammar start from a region contain whole image. For each $R$ we calculate a score which is based on the quality of data we can improve by separate it in to smaller regions. After some operation, we can encode the image into a string $\omega$. Figure~\ref{fig:pngImage} shows an example of image which was took by FLIR ONE PRO. One of the possible outcome is Figure~\ref{fig:SeparateImage} if we separate the image 6 times and it will have 19 regions. By this method, we can iteratively separate an image until the number of regions reach our file size requirement.
The context-free grammar starts from a region containing the whole image. For each $R$ we calculate a score based on the data variance in the region. After some operations, we can encode the image into a string $\omega$. Figure~\ref{fig:pngImage} shows an example image taken by FLIR ONE PRO. Figure~\ref{fig:SeparateImage} shows one possible outcome: separating the image 6 times yields 19 regions. With this method, we can iteratively separate an image until the number of regions reaches our file size requirement.
\begin{figure}[ht]
@ -55,7 +55,7 @@ To help us choose which region to be separated, we give every region a score, an
\end{tabular}
\end{center}
By the equation shows above, we just need to know the sum of squared and the mean of a region, we can get its score. we can use a 4-dimension segment tree to store all possible regions and its scores. For each node, it store the range on both width and height it covered, sum $\sum\limits_{X\in R} X$, and squared sum $\sum\limits_{X\in R} X^2$ of pixels in the region. By the property of segment tree, tree root start from $0$, and each node $X_i$ has four child $X_{i\times 4+1}$, $X_{i\times 4+2}$, $X_{i\times 4+3}$ and $X_{i\times 4+4}$. Hence, we only need to allocate a large array and recursively process all nodes form root. Algorithm~\ref{code:SegmentTreePreprocess} shows how we generate the tree.
By the equation shown above, we only need the squared sum and the mean of a region to calculate its score. We use a 4-ary segment tree to store all possible regions and their scores. Since the segment tree is a complete tree, its size is less than 2 times the number of pixels. Each node of the segment tree records the width and height range it covers, the sum $\sum\limits_{X\in R} X$, and the squared sum $\sum\limits_{X\in R} X^2$ of the pixels in the region. By the indexing of the tree, the root is node $0$, and each node $X_i$ has four children $X_{i\times 4+1}$, $X_{i\times 4+2}$, $X_{i\times 4+3}$ and $X_{i\times 4+4}$. Hence, we only need to allocate one large array and recursively process all nodes from the root. Algorithm~\ref{code:SegmentTreePreprocess} shows how we generate the tree.
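A sketch of the preprocessing, not the paper's exact Algorithm 1: the tree is filled bottom-up so each node caches its rectangle, pixel sum, and squared sum, and a variance-style score falls straight out of those two aggregates. For brevity the sketch assumes a $2^k \times 2^k$ image; a real implementation must also handle odd-sized rectangles.

```python
def build(img, tree, node=0, rect=None):
    """Fill tree[node] = (rect, sum, squared_sum) for rect = (x, y, w, h).

    Node i's children are 4*i+1 .. 4*i+4, mirroring the indexing above.
    """
    if rect is None:
        rect = (0, 0, len(img[0]), len(img))
    x, y, w, h = rect
    if w == 1 and h == 1:
        v = img[y][x]
        tree[node] = (rect, v, v * v)
    else:
        w2, h2 = w // 2, h // 2
        quads = [(x, y, w2, h2), (x + w2, y, w2, h2),
                 (x, y + h2, w2, h2), (x + w2, y + h2, w2, h2)]
        s = sq = 0
        for k, q in enumerate(quads):
            child = 4 * node + 1 + k
            build(img, tree, child, q)
            _, cs, csq = tree[child]
            s, sq = s + cs, sq + csq
        tree[node] = (rect, s, sq)
    return tree

def score(entry):
    """Sum of squared error if the region is replaced by its mean."""
    (x, y, w, h), s, sq = entry
    return sq - s * s / (w * h)

img = [[1, 1, 2, 2], [1, 1, 2, 2], [5, 5, 8, 8], [5, 5, 8, 8]]
tree = build(img, {})
print(round(score(tree[0]), 2))   # → 120.0
```

Note that the four quadrants of this toy image are perfectly uniform, so their scores are zero while the root's is large, which is exactly the signal the selection step exploits.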
\begin{algorithm*}[h]
\caption{Segment Tree Preprocess}
@ -80,7 +80,7 @@ By the equation shows above, we just need to know the sum of squared and the mea
\end{algorithmic}
\end{algorithm*}
For region selection, we use a priority queue to retrieve the region of considerate regions with highest score. The priority queue start with only root of the segment tree. For each round the priority queue pop the item with highest score and push all its child in to the queue. Algorithm~\ref{code:RegionSelection} shows how we select a region by the priority queue. After the selection finished, we will generate the data string to be sent. The regions in $seperatedRegions$ will be $\beta$ and others in $PriorityQueue$ will be the average value, and then compress the string by Huffman Coding.
For region selection, we use a priority queue to retrieve the candidate region with the highest score. The priority queue is implemented with a heap and starts with only the root of the segment tree. Each round, the priority queue pops the item with the highest score and pushes all of its children into the queue. Algorithm~\ref{code:RegionSelection} shows how we select a region with the priority queue. After the selection finishes, we generate the data string to be sent: the regions in $seperatedRegions$ become $\beta$, the remaining regions in $PriorityQueue$ become their average values, and the string is then compressed by Huffman coding.
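A self-contained sketch of this greedy loop (not the paper's exact Algorithm 2; the node scores below are a toy stand-in for the segment tree of the previous subsection):

```python
import heapq

def select_regions(scores, rounds):
    """Greedy selection: repeatedly split the highest-score region.

    scores maps node index -> score; node i's children are 4*i+1 .. 4*i+4
    (absent for 1x1 leaves). Returns (separated nodes, remaining nodes).
    """
    pq = [(-scores[0], 0)]                 # max-heap via negated scores
    separated = []
    for _ in range(rounds):
        _, node = heapq.heappop(pq)
        separated.append(node)
        for k in range(1, 5):              # push the four children, if present
            child = 4 * node + k
            if child in scores:
                heapq.heappush(pq, (-scores[child], child))
    return separated, sorted(node for _, node in pq)

# Toy tree: root 0 with children 1..4; child 2 is the only busy region,
# with its own children 9..12.
scores = {0: 120.0, 1: 0.0, 2: 36.0, 3: 0.0, 4: 0.0,
          9: 9.0, 10: 9.0, 11: 9.0, 12: 9.0}
separated, remaining = select_regions(scores, 2)
print(separated, remaining)   # → [0, 2] [1, 3, 4, 9, 10, 11, 12]
```

After 2 rounds the queue holds $1 + 2 \times 3 = 7$ regions, matching the 3-regions-per-round accounting; `separated` plays the role of $seperatedRegions$ in the text.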
\begin{algorithm*}[h]
\caption{Region Selection}


+ 14
- 12
trunk/04Evaluation.tex

@ -1,22 +1,24 @@
\section{Performance Evaluation}
\label{sec:eval}
To evaluate the effectiveness of the proposed method, we do the different ratios of compressing on a thermal image by our method compared to JPEG image using different quality and png image, a lossless bit map image. We set the camera at the ceiling and view direction is perpendicular to the ground, and the image size is $480 \times 640$ pixels. The JPEG image is generated by OpenCV $3.3.0$, and image quality from $1$ to $99$.
To evaluate the effectiveness of the proposed method, we compressed a thermal image at different ratios with our method and compared the results against JPEG images at different quality levels and against a PNG image, a lossless bitmap format. The camera is mounted on the ceiling with its view direction perpendicular to the ground, and the image size is $480 \times 640$ pixels. The JPEG images are generated by OpenCV $3.3.0$ with image quality from $1$ to $99$.
Figure~\ref{fig:4KMy} and Figure~\ref{fig:4KJpeg} show the difference between JPEG and our method. The JPEG image is generated at image quality level $3$, while the image from our method uses $1390$ rounds of separation and is compressed by Huffman coding. In this case, Huffman coding reduces our image size by $39\%$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\columnwidth]{figures/my4000.png}
\caption{4KB Image by Proposed Method}
\label{fig:4KMy}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\columnwidth]{figures/quality3.jpg}
\caption{4KB Image by JPEG}
\label{fig:4KJpeg}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/my4000.png}
\caption{4KB Image by Proposed Method}
\label{fig:4KMy}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/quality3.jpg}
\caption{4KB Image by JPEG}
\label{fig:4KJpeg}
\end{minipage}
\end{figure}
Figure~\ref{fig:compareToJpeg} shows that our file size is reduced by more than $50\%$ compared to a JPEG image when both have $0.5\% (0.18^\circ C)$ root-mean-square error, and our method has $82\%$ less error when both images are $4$KB. File size percentages are relative to the PNG image.
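For reference, the error metric behind these percentages can be sketched as follows. The choice of value range is our assumption (a $0.5\%$ error over a roughly $36^\circ C$ span corresponds to the $0.18^\circ C$ quoted above); the pixel lists are synthetic:

```python
import math

def rmse_percent(orig, recon, value_range=255.0):
    """Root-mean-square error between two equal-sized images,
    expressed as a percentage of the value range."""
    assert len(orig) == len(recon)
    mse = sum((x - y) ** 2 for x, y in zip(orig, recon)) / len(orig)
    return 100.0 * math.sqrt(mse) / value_range

a = [10, 20, 30, 40]
b = [10, 20, 31, 39]        # reconstruction off by one on two pixels
print(round(rmse_percent(a, b), 3))   # → 0.277
```

Both our method's output and the decoded JPEG are scored against the original with the same metric, which is what makes the size-at-equal-error comparison meaningful.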


+ 11
- 11
trunk/Main.aux

@ -35,40 +35,40 @@
\newlabel{fig:GridEye}{{2}{4}}
\abx@aux@segm{0}{0}{Shih17b}
\abx@aux@segm{0}{0}{Shih17b}
\abx@aux@cite{guo2011simple}
\abx@aux@segm{0}{0}{guo2011simple}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {II-B}}FLIR ONE PRO}{5}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {II-C}}Raspberry Pi 3}{5}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {II-D}}Simple Data Compressing}{5}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {II-D}1}Huffman Coding}{5}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {II-D}2}Z-score Threshold}{5}}
\abx@aux@cite{guo2011simple}
\abx@aux@segm{0}{0}{guo2011simple}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {II-D}3}Gaussian Function Fitting}{6}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {III}Data Size Decision Framework}{6}}
\newlabel{sec:design}{{III}{6}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {III-A}}Region Represent Grammar}{6}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces System Architecture}}{7}}
\newlabel{fig:SystemArchitecture}{{3}{7}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {III-B}}Data Structure and Region Selection Algorithm}{7}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {III-A}}Region Represent Grammar}{7}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces PNG image, size = 46KB}}{8}}
\newlabel{fig:pngImage}{{4}{8}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces Region separate by CFG}}{8}}
\newlabel{fig:SeparateImage}{{5}{8}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {III-B}}Data Structure and Region Selection Algorithm}{8}}
\@writefile{loa}{\defcounter {refsection}{0}\relax }\@writefile{loa}{\contentsline {algorithm}{\numberline {1}{\ignorespaces Segment Tree Preprocess}}{9}}
\newlabel{code:SegmentTreePreprocess}{{1}{9}}
\@writefile{loa}{\defcounter {refsection}{0}\relax }\@writefile{loa}{\contentsline {algorithm}{\numberline {2}{\ignorespaces Region Selection}}{9}}
\newlabel{code:RegionSelection}{{2}{9}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {IV}Performance Evaluation}{9}}
\newlabel{sec:eval}{{IV}{9}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {IV}Performance Evaluation}{10}}
\newlabel{sec:eval}{{IV}{10}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {6}{\ignorespaces 4KB Image by Proposed Method}}{10}}
\newlabel{fig:4KMy}{{6}{10}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}1}Date Structure Initialize}{10}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {7}{\ignorespaces 4KB Image by JPEG}}{11}}
\newlabel{fig:4KJpeg}{{7}{11}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {7}{\ignorespaces 4KB Image by JPEG}}{10}}
\newlabel{fig:4KJpeg}{{7}{10}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces Proposed method and JPEG comparing}}{11}}
\newlabel{fig:compareToJpeg}{{8}{11}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}1}Date Structure Initialize}{11}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}2}Image Loading}{11}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}3}Region Separation}{11}}
\newlabel{sec:conclusion}{{V}{11}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {V}Conclusion}{11}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces Proposed method and JPEG comparing}}{12}}
\newlabel{fig:compareToJpeg}{{8}{12}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {9}{\ignorespaces Computation Time of Separate Regions}}{12}}
\newlabel{fig:computeTime}{{9}{12}}

+ 37
- 50
trunk/Main.log

@ -1,4 +1,4 @@
This is pdfTeX, Version 3.14159265-2.6-1.40.17 (MiKTeX 2.9.6210 64-bit) (preloaded format=pdflatex 2018.3.9) 14 MAR 2018 16:04
This is pdfTeX, Version 3.14159265-2.6-1.40.17 (MiKTeX 2.9.6210 64-bit) (preloaded format=pdflatex 2018.3.9) 15 MAR 2018 13:49
entering extended mode
**./Main.tex
(Main.tex
@ -917,56 +917,56 @@ LaTeX Font Info: External font `cmex10' loaded for size
)
(01Introduction.tex
LaTeX Warning: Citation 'Middleton2015' on page 1 undefined on input line 4.
LaTeX Warning: Citation 'Middleton2015' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'Middleton2015' on page 1 undefined on input line 4.
LaTeX Warning: Citation 'Middleton2015' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'studenski2011' on page 1 undefined on input line 4.
LaTeX Warning: Citation 'studenski2011' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'Studenski03' on page 1 undefined on input line 4.
LaTeX Warning: Citation 'Studenski03' on page 1 undefined on input line 6.
Overfull \hbox (49.3852pt too wide) in paragraph at lines 4--5
Overfull \hbox (49.3852pt too wide) in paragraph at lines 6--7
\OT1/ptm/m/n/12 cause it re-quires more oxy-gen to con-tract the mus-cles. Henc
e, \OT1/ptm/m/it/12 gait ve-loc-ity\OT1/ptm/m/n/12 , or called \OT1/ptm/m/it/12
walk-ing speed \OT1/ptm/m/n/12 [[]],
[]
Overfull \hbox (23.1505pt too wide) in paragraph at lines 4--5
Overfull \hbox (23.1505pt too wide) in paragraph at lines 6--7
\OT1/ptm/m/n/12 has be-come a valid and im-por-tant met-ric for se-nior pop-u-l
a-tions [[], [], []].
[]
LaTeX Warning: Citation 'studenski2011' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'studenski2011' on page 1 undefined on input line 8.
LaTeX Warning: Citation 'Studenski03' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'Studenski03' on page 1 undefined on input line 8.
LaTeX Warning: Citation 'pulignano2016' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'pulignano2016' on page 1 undefined on input line 8.
LaTeX Warning: Citation 'Konthoraxjnl2015' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'Konthoraxjnl2015' on page 1 undefined on input line 8.
LaTeX Warning: Citation 'kutner2015' on page 1 undefined on input line 6.
LaTeX Warning: Citation 'kutner2015' on page 1 undefined on input line 8.
LaTeX Warning: Citation 'profile2015' on page 1 undefined on input line 8.
LaTeX Warning: Citation 'profile2015' on page 1 undefined on input line 10.
LaTeX Warning: Citation 'Peters2013' on page 1 undefined on input line 8.
LaTeX Warning: Citation 'Peters2013' on page 1 undefined on input line 10.
[1
Non-PDF special ignored!{C:/ProgramData/MiKTeX/2.9/pdftex/config/pdftex.map}]
LaTeX Warning: Citation 'Shih17b' on page 2 undefined on input line 12.
LaTeX Warning: Citation 'Shih17b' on page 2 undefined on input line 14.
@ -976,7 +976,7 @@ nd PDF version <1.7>, but at most version <1.5> allowed
File: figures/ThermalAtHome.pdf Graphic file (type pdf)
<use figures/ThermalAtHome.pdf>
Package pdftex.def Info: figures/ThermalAtHome.pdf used on input line 15.
Package pdftex.def Info: figures/ThermalAtHome.pdf used on input line 17.
(pdftex.def) Requested size: 516.0pt x 343.26186pt.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <12> on input line 22.
@ -988,7 +988,7 @@ LaTeX Font Info: External font `cmex10' loaded for size
pdfTeX warning: pdflatex (file ./figures/GridEye.pdf): PDF inclusion: found PDF
version <1.7>, but at most version <1.5> allowed
<figures/GridEye.pdf, id=49, 332.24126pt x 180.675pt>
<figures/GridEye.pdf, id=50, 332.24126pt x 180.675pt>
File: figures/GridEye.pdf Graphic file (type pdf)
<use figures/GridEye.pdf>
@ -1001,14 +1001,15 @@ LaTeX Warning: Citation 'Shih17b' on page 5 undefined on input line 34.
LaTeX Warning: Citation 'Shih17b' on page 5 undefined on input line 34.
[5]
LaTeX Warning: Citation 'guo2011simple' on page 5 undefined on input line 37.
LaTeX Warning: Citation 'guo2011simple' on page 6 undefined on input line 37.
[5]) (03Design.tex
) (03Design.tex
pdfTeX warning: pdflatex (file ./figures/SystemArchitecture.pdf): PDF inclusion
: found PDF version <1.7>, but at most version <1.5> allowed
<figures/SystemArchitecture.pdf, id=62, 578.16pt x 325.215pt>
<figures/SystemArchitecture.pdf, id=63, 578.16pt x 325.215pt>
File: figures/SystemArchitecture.pdf Graphic file (type pdf)
<use figures/SystemArchitecture.pdf>
@@ -1030,38 +1031,32 @@ Package pdftex.def Info: figures/separate.png used on input line 40.
[7 <./figures/SystemArchitecture.pdf>]
** WARNING: \and is valid only when in conference or peerreviewca
modes (line 66).
LaTeX Warning: `h' float specifier changed to `ht'.
LaTeX Warning: `h' float specifier changed to `ht'.
) (04Evaluation.tex [8 <./figures/real.png (PNG copy)> <./figures/separate.png
(PNG copy)>] [9] <figures/my4000.png, id=87, 481.8pt x 642.4pt>
[8 <./figures/real.png (PNG copy)> <./figures/separate.png (PNG copy)>])
(04Evaluation.tex [9] <figures/my4000.png, id=87, 481.8pt x 642.4pt>
File: figures/my4000.png Graphic file (type png)
<use figures/my4000.png>
Package pdftex.def Info: figures/my4000.png used on input line 10.
(pdftex.def) Requested size: 309.60315pt x 412.81078pt.
Package pdftex.def Info: figures/my4000.png used on input line 11.
(pdftex.def) Requested size: 232.19843pt x 309.59338pt.
<figures/quality3.jpg, id=88, 481.8pt x 642.4pt>
File: figures/quality3.jpg Graphic file (type jpg)
<use figures/quality3.jpg>
Package pdftex.def Info: figures/quality3.jpg used on input line 17.
(pdftex.def) Requested size: 309.60315pt x 412.81078pt.
<figures/compareToJpeg.pdf, id=89, 642.4pt x 385.44pt>
Package pdftex.def Info: figures/quality3.jpg used on input line 18.
(pdftex.def) Requested size: 232.19843pt x 309.59338pt.
[10 <./figures/my4000.png (PNG copy)> <./figures/quality3.jpg>] <figures/compa
reToJpeg.pdf, id=92, 642.4pt x 385.44pt>
File: figures/compareToJpeg.pdf Graphic file (type pdf)
<use figures/compareToJpeg.pdf>
Package pdftex.def Info: figures/compareToJpeg.pdf used on input line 26.
Package pdftex.def Info: figures/compareToJpeg.pdf used on input line 28.
(pdftex.def) Requested size: 516.0pt x 309.61102pt.
[10 <./figures/my4000.png (PNG copy)>]
<figures/computeTime.pdf, id=93, 642.4pt x 385.44pt>
File: figures/computeTime.pdf Graphic file (type pdf)
<use figures/computeTime.pdf>
Package pdftex.def Info: figures/computeTime.pdf used on input line 41.
Package pdftex.def Info: figures/computeTime.pdf used on input line 43.
(pdftex.def) Requested size: 516.0pt x 309.61102pt.
) (05Conclusion.tex) (06Acknowledge.tex)
@@ -1070,19 +1065,11 @@ LaTeX Warning: Empty bibliography on input line 143.
\svn@write=\write7
\openout7 = `Main.svn'.
[11 <./figures/quality3.jpg>] [12 <./figures/compareToJpeg.pdf> <./figures/comp
uteTime.pdf
pdfTeX warning: pdflatex (file ./figures/computeTime.pdf): PDF inclusion: multi
ple pdfs with page group included in a single page
>] (Main.aux)
[11 <./figures/compareToJpeg.pdf>] [12 <./figures/computeTime.pdf>] (Main.aux)
LaTeX Warning: There were undefined references.
LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.
Package biblatex Warning: Please (re)run Biber on the file:
(biblatex) Main
(biblatex) and rerun LaTeX afterwards.
@@ -1094,11 +1081,11 @@ Package logreq Info: Writing requests to 'Main.run.xml'.
Here is how much of TeX's memory you used:
22513 strings out of 493333
425347 string characters out of 3139189
929148 words of memory out of 3000000
928149 words of memory out of 3000000
25622 multiletter control sequences out of 15000+200000
37218 words of font info for 72 fonts, out of 3000000 for 9000
1141 hyphenation exceptions out of 8191
64i,8n,56p,1077b,1282s stack positions out of 5000i,500n,10000p,200000b,50000s
64i,8n,56p,1104b,1282s stack positions out of 5000i,500n,10000p,200000b,50000s
{C:/Program Files/MiKTeX 2.9/fonts/enc/dvips/base/8r.enc}<C:/Program Files/Mi
KTeX 2.9/fonts/type1/public/amsfonts/cm/cmex10.pfb><C:/Program Files/MiKTeX 2.9
/fonts/type1/public/amsfonts/cm/cmmi10.pfb><C:/Program Files/MiKTeX 2.9/fonts/t
@@ -1112,9 +1099,9 @@ KTeX 2.9/fonts/type1/public/amsfonts/cm/cmsy8.pfb><C:/Program Files/MiKTeX 2.9/
fonts/type1/urw/times/utmb8a.pfb><C:/Program Files/MiKTeX 2.9/fonts/type1/urw/t
imes/utmr8a.pfb><C:/Program Files/MiKTeX 2.9/fonts/type1/urw/times/utmri8a.pfb>
Output written on Main.pdf (12 pages, 648352 bytes).
Output written on Main.pdf (12 pages, 648701 bytes).
PDF statistics:
157 PDF objects out of 1000 (max. 8388607)
158 PDF objects out of 1000 (max. 8388607)
0 named destinations out of 1000 (max. 500000)
64 words of extra memory for PDF output out of 10000 (max. 10000000)

BIN
trunk/Main.pdf

BIN
trunk/Main.synctex.gz