
git-svn-id: http://newslabx.csie.ntu.edu.tw/svn/Ginger@4 5747cdd2-2146-426f-b2b0-0570f90b98ed

master
Hobe 7 years ago
parent
commit
e0fd8bc05c
13 changed files with 96 additions and 71 deletions
  1. +1
    -1
      trunk/00Abstract.tex
  2. +4
    -4
      trunk/02Background.tex
  3. +13
    -4
      trunk/03Design.tex
  4. +3
    -3
      trunk/04Evaluation.tex
  5. +23
    -21
      trunk/Main.aux
  6. +0
    -1
      trunk/Main.bcf
  7. +51
    -35
      trunk/Main.log
  8. BIN
      trunk/Main.pdf
  9. +0
    -1
      trunk/Main.run.xml
  10. BIN
      trunk/Main.synctex.gz
  11. +1
    -1
      trunk/Main.tex
  12. BIN
      trunk/Slide/Parameterized Data Reduction Framework for IoT Devices.pptx
  13. BIN
      trunk/figures/SystemArchitecture.pdf

+ 1
- 1
trunk/00Abstract.tex

@ -1,5 +1,5 @@
\begin{abstract}
In a IoT environment, many devices will periodically transmit data. However, most of the data are redundant, but sensor itself may not have a good standard to decide transmit or not. Some static rule maybe useful on specific scenario, and become ineffective when we change the usage of the sensor. Hence, we design an algorithm to solve the problem of data redundant. In the algorithm, we iteratively separate an image into some smaller regions. Each round, choose a region with highest variability, and separate it into four regions. Finally, each region has different size and uses its average value to represent itself. If a area is more various, the density of regions will be higher. In this paper, we present a method to reduce the file size of thermal sensor which can sense the temperature of a surface and outputs a two dimension gray scale image. In our evaluation result, we can reduce the file size to $50\%$ less than JPEG when there is $0.5\%$ of distortion, and up to $93\%$ less when there is $2\%$ of distortion.
In an IoT environment, many devices periodically transmit data. However, most of the data is redundant, and the sensor itself may not have a good criterion to decide whether to send. A static rule may be useful in a specific scenario but becomes ineffective when we change the usage of the sensor. Hence, we design an algorithm to solve the data redundancy problem. In the algorithm, we iteratively separate an image into smaller regions. In each round, we choose the region with the highest variability and separate it into four regions. Finally, each region has a different size and is represented by its average value. The more variable an area is, the higher the density of regions becomes. In this paper, we present a method to reduce the file size of a thermal sensor that senses the temperature of a surface and outputs a two-dimensional grayscale image. In our evaluation, we reduce the file size to $50\%$ less than JPEG at $0.5\%$ distortion, and up to $93\%$ less at $2\%$ distortion.
\end{abstract}

+ 4
- 4
trunk/02Background.tex

@ -4,7 +4,7 @@
\subsection{Panasonic Grid-EYE Thermal Sensor}
First, we study the Panasonic Grid-EYE sensor, a thermal camera that outputs $8 \times 8$ pixel images with $2.5^\circ C$ accuracy and $0.25^\circ C$ resolution at $10$ frames per second. In normal mode, the current consumption is 4.5mA. Because it is a low-resolution infrared array sensor, we can install it in a house without the privacy issues that a surveillance camera may cause.
When someone walks under a Grid-EYE sensor, we will see some pixels with higher temperature than others. Figure~\ref{fig:GridEye} shows an example of image from Grid-EYE sensor. The sensor value will look like a cone shape. The pixel with our head will have the highest temperature, body is lower, and leg is the lowest except background because when the distance from camera to our body is longer, the area cover by the camera will be wider and the ratio of background temperature in the pixel will increase, also our head do not cover by cloth, so the surface temperature will higher than other place. while we are walking in a area, the temperature of air in the area will become warmer, and the shape of human will be harder to recognize.
When someone walks under a Grid-EYE sensor, we see some pixels with a higher temperature than the others. Figure~\ref{fig:GridEye} shows an example image from the Grid-EYE sensor. The sensor values form a cone shape: the pixels covering the head have the highest temperature, the body is lower, and the legs are the lowest apart from the background. This is because, as the distance from the camera to the body grows, each pixel covers a wider area and the background temperature contributes a larger share of the reading; also, the head is not covered by clothing, so its surface temperature is higher than elsewhere. While a person walks around an area, the air there becomes warmer, and the shape of the human becomes harder to recognize.
\begin{figure}[htbp]
\centering
@ -13,7 +13,7 @@ When someone walks under a Grid-EYE sensor, we will see some pixels with higher
\label{fig:GridEye}
\end{figure}
The data we used is from a solitary elder's home. We deployed four Grid-EYE sensor at the corner of her living room, and recorded the thermal video for three weeks at $10$ frames per second data rate.
The data we use comes from a solitary elder's home. We deployed four Grid-EYE sensors at the corners of her living room and recorded thermal video for three weeks at a data rate of $10$ frames per second.
\subsection{FLIR ONE PRO}
@ -31,8 +31,8 @@ If we save a frame in a readable format, it will take about 380 bytes storage. H
Huffman coding is a lossless data compression method. On average, it reduces the frame size from $64$ bytes to $40.7$ bytes with a $6$-byte standard deviation.
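As a minimal illustration of this lossless step (a sketch, not the exact coder used in the paper), a Huffman code table can be built with a binary heap over symbol frequencies; more frequent temperature values receive shorter codes:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data`."""
    freq = Counter(data)
    # Heap entries: (frequency, unique tiebreaker, tree), where a tree is
    # either a symbol or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                      # degenerate case: one distinct symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:                # repeatedly merge the two rarest trees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):             # assign '0'/'1' along each branch
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

def encode(data, codes):
    """Concatenate the per-symbol codes into one bit string."""
    return "".join(codes[sym] for sym in data)
```

Because thermal frames are dominated by a narrow band of background temperatures, the frequency distribution is highly skewed, which is exactly where Huffman coding pays off.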
\subsubsection{Z-score Threshold}
We can only transmit the pixels with higher temperature since thermal sensors are mostly used for detect heat source. Z-score is define as $z = \frac{\chi - \mu}{\sigma}$, where $\chi$ is the value of the temperature, $\mu$ is the average of the temperature and $\sigma$ is the standard deviation of the temperature. In our earlier work~\cite{Shih17b}, we use Z-score instead of a static threshold to detect human because the background temperature may have a $10^\circ C$ difference between day and night, and when people walk through the sensing area the Grid-EYE, the temperature reading will only increase $2^\circ C$ to $3^\circ C$. Hence, it is impossible to use a static threshold to detect human. In~\cite{Shih17b}, we only use the pixels with the Z-score higher than $2$, so we can reduce the frame size from $64$ bytes to $12.6$ bytes with $2.9$ bytes standard deviation by Z-score threshold $2$ and compress by Huffman coding.
We can transmit only the pixels with higher temperature, since thermal sensors are mostly used to detect heat sources. The Z-score is defined as $z = \frac{\chi - \mu}{\sigma}$, where $\chi$ is the temperature value, $\mu$ is the average temperature and $\sigma$ is the standard deviation of the temperature. In our earlier work~\cite{Shih17b}, we use the Z-score instead of a static threshold to detect humans, because the background temperature may differ by $10^\circ C$ between day and night, while a person walking through the sensing area of the Grid-EYE raises the reading by only $2^\circ C$ to $3^\circ C$. Hence, a static threshold cannot reliably detect humans. In~\cite{Shih17b}, we keep only the pixels with a Z-score higher than $2$; with this threshold and Huffman coding, the frame size drops from $64$ bytes to $12.6$ bytes with a $2.9$-byte standard deviation.
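The filtering step can be sketched as follows, assuming a frame arrives as a flat list of 64 temperature readings (the function name is illustrative):

```python
import statistics

def zscore_filter(pixels, threshold=2.0):
    """Return (index, value) pairs whose Z-score exceeds `threshold`.

    z = (x - mu) / sigma, computed over a single frame, so the
    threshold adapts to day/night background temperature shifts.
    """
    mu = statistics.mean(pixels)
    sigma = statistics.pstdev(pixels)
    if sigma == 0:                      # perfectly flat frame: nothing stands out
        return []
    return [(i, x) for i, x in enumerate(pixels)
            if (x - mu) / sigma > threshold]
```

A mostly uniform background frame with one warm pixel keeps only that pixel, whatever the absolute background temperature is.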
\subsubsection{Gaussian Function Fitting}
Since the shape of human in a thermal image looks like a cone, we may use a gaussian function to fit the image. A Gaussian function $y = Ae^{-(x-B)^2/2C^2}$ has three parameter $A, B and C$. The parameter $A$ is the height of the cone, $B$ is the position of the cone's peak and $C$ controls the width of the cone. We let the pixel with highest temperature be the peak of the cone, so we only need to adjust $A and C$ to fit the image. Guo~\cite{guo2011simple} provide a fast way to get the fitting Gaussian function. In our testing, it will be about $0.5^\circ C$ root-mean-square error, and only needs $5$ bytes to store the position of peak and two parameters.
Since the shape of a human in a thermal image looks like a cone, we may fit the image with a Gaussian function. A Gaussian function $y = Ae^{-(x-B)^2/2C^2}$ has three parameters $A$, $B$ and $C$: $A$ is the height of the cone, $B$ is the position of the cone's peak and $C$ controls its width. We let the pixel with the highest temperature be the peak of the cone, so we only need to adjust $A$ and $C$ to fit the image. Guo~\cite{guo2011simple} offers a fast way to obtain the fitting Gaussian function. In our tests, the fit has about a $0.5^\circ C$ root-mean-square error and needs only $5$ bytes to store the peak position and the two parameters.
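Guo's approach works in log space; the sketch below shows only the underlying log-parabola idea and is an illustration, not the exact iteratively weighted scheme of~\cite{guo2011simple} (it also fits all three parameters, whereas the paper fixes $B$ at the hottest pixel). Taking logarithms turns the Gaussian into a quadratic, $\ln y = a + bx + cx^2$ with $c = -1/2C^2$, $b = B/C^2$ and $a = \ln A + cB^2$, so an ordinary least-squares quadratic fit recovers all three parameters:

```python
import math

def solve3(M, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                for k in range(col, 4):
                    A[r][k] -= f * A[col][k]
    return [A[i][3] / A[i][i] for i in range(3)]

def fit_gaussian_1d(xs, ys):
    """Fit y = A*exp(-(x-B)^2 / (2*C^2)) by least squares on ln(y)."""
    S = [[0.0] * 3 for _ in range(3)]   # normal equations for the quadratic
    t = [0.0] * 3
    for x, y in zip(xs, ys):
        ly = math.log(y)                # requires y > 0
        row = [1.0, x, x * x]
        for i in range(3):
            t[i] += row[i] * ly
            for j in range(3):
                S[i][j] += row[i] * row[j]
    a, b, c = solve3(S, t)
    C = math.sqrt(-1.0 / (2.0 * c))     # invert c = -1/(2C^2)
    B = -b / (2.0 * c)                  # invert b = B/C^2
    A = math.exp(a - c * B * B)         # invert a = ln A + c*B^2
    return A, B, C
```

On noise-free Gaussian samples this recovers the parameters exactly; on real thermal data a weighting step (as in Guo's paper) reduces the influence of low-temperature pixels, whose logarithms are noisy.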

+ 13
- 4
trunk/03Design.tex

@ -3,6 +3,15 @@
This section presents the proposed method, which outputs a data array smaller than a JPEG image when some data error can be tolerated. We use images captured by the FLIR ONE PRO. In a thermal image, the temperature variation between nearby pixels is very small except at the edges of objects. Hence, we can separate an image into several regions; since the pixels in the same region have similar values, we can represent each region by its average value without causing too much error. However, precisely separating an image into polygonal regions takes a lot of computation time, and the edge of each region is hard to describe. Deciding the number of regions is also a problem. Hence, to describe regions effectively, we require every region to be a rectangle, and a region can only be separated into four regions by cutting it in half horizontally and vertically at the middle. The image starts as a single region, and each round adds three regions since one region is cut into four pieces.
Our method is shown in Figure~\ref{fig:SystemArchitecture}. Data structure initialization only needs to be done once if the image size does not change. A thermal image is loaded into our data structure and separated into several regions. Finally, the output data is encoded with Huffman coding and transmitted to the database. When users want the image, they retrieve the encoded data from the database.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figures/SystemArchitecture.pdf}
\caption{System Architecture}
\label{fig:SystemArchitecture}
\end{figure}
\subsection{Region Represent Grammar}
For each frame, we can use a context-free language to represent it.
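The productions themselves are abridged in this diff; as an illustrative sketch only (the exact grammar is an assumption here), a split region can expand to its four sub-regions in parentheses and a leaf to its average value, giving a serialization that is trivial to parse back:

```python
def serialize(node):
    """Encode a region tree as a string (illustrative grammar, not the
    paper's exact productions): a split region expands to '( R R R R )',
    a leaf becomes its average value."""
    if isinstance(node, list):          # a split region: four sub-regions
        return "(" + " ".join(serialize(c) for c in node) + ")"
    return str(node)                    # a leaf: the region's average

def parse(tokens):
    """Inverse of serialize: rebuild the region tree from a token list."""
    tok = tokens.pop(0)
    if tok == "(":
        children = [parse(tokens) for _ in range(4)]
        assert tokens.pop(0) == ")"
        return children
    return int(tok)
```

Because every interior node has exactly four children, the string needs no explicit region coordinates: positions are implied by the fixed quadrant order.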
@ -19,14 +28,14 @@ The context-free grammar start from a region contain whole image. For each $R$ w
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/real.png}
\caption{PNG image, size = 46KB}
\label{fig:pngImage}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}[b]{0.5\linewidth}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/separate.png}
\caption{Region separate by CFG}
@ -46,7 +55,7 @@ To help us choose which region to be separated, we give every region a score, an
\end{tabular}
\end{center}
By the equation shows above, we just need to know the sum of squared and the mean of a region, we can get its score. we can use a segment tree to store all possible regions and its scores. For each node, it store the range on both width and height it covered, sum $\sum\limits_{X\in R} X$, and squared sum $\sum\limits_{X\in R} X^2$ of pixels in the region. By the property of segment tree, tree root start from $0$, and each node $X_i$ has four child $X_{i\times 4+1}$, $X_{i\times 4+2}$, $X_{i\times 4+3}$ and $X_{i\times 4+4}$. Hence, we only need to allocate an large array and recursively process all nodes form root. Algorithm~\ref{code:SegmentTreePreprocess} shows how we generate the tree.
By the equation shown above, we only need the squared sum and the mean of a region to compute its score. We can use a 4-dimensional segment tree to store all possible regions and their scores. Each node stores the width and height ranges it covers, the sum $\sum\limits_{X\in R} X$, and the squared sum $\sum\limits_{X\in R} X^2$ of the pixels in the region. By the structure of the segment tree, the root is node $0$, and each node $X_i$ has four children $X_{i\times 4+1}$, $X_{i\times 4+2}$, $X_{i\times 4+3}$ and $X_{i\times 4+4}$. Hence, we only need to allocate one large array and recursively process all nodes from the root. Algorithm~\ref{code:SegmentTreePreprocess} shows how we generate the tree.
\begin{algorithm*}[h]
\caption{Segment Tree Preprocess}
@ -91,4 +100,4 @@ For region selection, we use a priority queue to retrieve the region of consider
\end{algorithmic}
\end{algorithm*}
The complexity of our algorithm can be separated into 3 parts. First part is to initialize the segment tree. The size of segment is depends on the size of the image. If the number of pixels is $N$, the height of segment tree is $O(Nlog(N))$, and the number of nodes will be $O(N)$. The time complexity of initialize is $O(N)$. Second part is loading the image. It will need to traverse whole tree from leaf to root. Since segment tree can be store in an array, it also takes $O(N)$ time to load the image. Third part is to separate regions. For each round, we pop an element from heap and push four elements into heap. If we have separated image $K$ times, the size of heap will be $3K+1$. Time complexity of pop and push will be $O(log(K))$, and do it $5K$ times will be $O(Klog(K))$.
The complexity of our algorithm can be separated into three parts. The first part is initializing the segment tree. The size of the tree depends on the size of the image: if the number of pixels is $N$, the height of the tree is $O(log(N))$ and the number of nodes is $O(N)$, so initialization takes $O(N)$ time. The second part is loading the image, which traverses the whole tree from the leaves to the root; since the segment tree is stored in an array, this also takes $O(N)$ time. The third part is separating regions. In each round, we pop one element from the heap and push four elements onto it. After separating the image $K$ times, the size of the heap is $3K+1$. Each pop and push takes $O(log(K))$ time, and performing $5K$ of them takes $O(Klog(K))$.
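The splitting loop described above can be sketched as follows (a sketch under assumptions: the score $\sum X^2 - (\sum X)^2/n$ is recomputed directly per region here, whereas the paper reads these sums from the precomputed segment tree, and `region_score`/`separate` are illustrative names):

```python
import heapq

def region_score(img, x0, y0, w, h):
    """Sum of squared deviations from the region mean:
    sum(v^2) - (sum(v))^2 / n."""
    vals = [img[y][x] for y in range(y0, y0 + h) for x in range(x0, x0 + w)]
    s = sum(vals)
    sq = sum(v * v for v in vals)
    return sq - s * s / len(vals)

def separate(img, rounds):
    """Repeatedly split the highest-variability rectangle into four quadrants."""
    h, w = len(img), len(img[0])
    # Max-heap via negated scores; ties break on region coordinates.
    heap = [(-region_score(img, 0, 0, w, h), 0, 0, w, h)]
    done = []
    for _ in range(rounds):
        neg, x0, y0, rw, rh = heapq.heappop(heap)
        if rw < 2 or rh < 2:            # too small to cut in half
            done.append((x0, y0, rw, rh))
            continue
        mw, mh = rw // 2, rh // 2      # cut in half horizontally and vertically
        for qx, qy, qw, qh in [(x0,      y0,      mw,      mh),
                               (x0 + mw, y0,      rw - mw, mh),
                               (x0,      y0 + mh, mw,      rh - mh),
                               (x0 + mw, y0 + mh, rw - mw, rh - mh)]:
            heapq.heappush(heap, (-region_score(img, qx, qy, qw, qh),
                                  qx, qy, qw, qh))
    return done + [(x0, y0, rw, rh) for _, x0, y0, rw, rh in heap]
```

Each non-degenerate round removes one region and adds four, so after $K$ rounds there are $3K+1$ regions, matching the heap-size analysis above.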

+ 3
- 3
trunk/04Evaluation.tex

@ -1,7 +1,7 @@
\section{Performance Evaluation}
\label{sec:eval}
To evaluate the effectiveness of the proposed method, we do the different ratios of compressing on a thermal image by our method compare to JPEG image using different quality and png image, a lossless bit map image. We set the camera at the ceiling and view direction is perpendicular to the ground, and the image size is $480 \times 640$ pixels. The JPEG image is generated by OpenCV $3.3.0$, and image quality from $1$ to $99$.
To evaluate the effectiveness of the proposed method, we compress a thermal image at different ratios with our method and compare against JPEG images at different quality settings and a PNG image, a lossless bitmap format. The camera is set at the ceiling with its view direction perpendicular to the ground, and the image size is $480 \times 640$ pixels. The JPEG images are generated by OpenCV $3.3.0$ with image quality from $1$ to $99$.
Figure~\ref{fig:4KMy} and Figure~\ref{fig:4KJpeg} show the difference between JPEG and our method. The JPEG image is generated at image quality level $3$, and our image uses $1390$ rounds of separation and is compressed with Huffman coding. In this case, Huffman coding reduces our image size by $39\%$.
@ -19,7 +19,7 @@ Figure~\ref{fig:4KMy} and Figure~\ref{fig:4KJpeg} show the different of JPEG and
\label{fig:4KJpeg}
\end{figure}
Figure~\ref{fig:compareToJpeg} shows that the size of file can reduce more than $50\%$ compare to JPEG image when both have $0.5\% (0.18^\circ C)$ of root-mean-square error. Our method has $82\%$ less error rate when both size are $4KB$ image. The percentage of file size is compare to PNG image.
Figure~\ref{fig:compareToJpeg} shows that our file size is more than $50\%$ smaller than the JPEG image when both have $0.5\%$ ($0.18^\circ C$) root-mean-square error. Our method has $82\%$ less error when both images are $4$KB. The file size percentages are relative to the PNG image.
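The root-mean-square error figures above can be reproduced with a straightforward helper (a generic sketch, not the paper's evaluation code):

```python
import math

def rmse(original, reconstructed):
    """Root-mean-square error between two equal-length pixel sequences."""
    n = len(original)
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(original, reconstructed)) / n)
```

Flattening both the original and the decoded image to pixel lists and calling `rmse` gives the distortion value that the file-size comparison is plotted against.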
\begin{figure}[ht]
\centering
@ -32,7 +32,7 @@ The computing time of a $480 \times 640$ image on Raspberry Pi 3 is:
\subsubsection{Data Structure Initialization}
0.233997 seconds.
\subsubsection{Image Loading}
1.364710 second.
1.268126 seconds.
\subsubsection{Region Separation}
About $4.6$ microseconds per separation, as shown in Figure~\ref{fig:computeTime}.


+ 23
- 21
trunk/Main.aux

@ -46,27 +46,29 @@
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {III}Data Size Decision Framework}{6}}
\newlabel{sec:design}{{III}{6}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {III-A}}Region Represent Grammar}{6}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces PNG image, size = 46KB}}{7}}
\newlabel{fig:pngImage}{{3}{7}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Region separate by CFG}}{7}}
\newlabel{fig:SeparateImage}{{4}{7}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces System Architecture}}{7}}
\newlabel{fig:SystemArchitecture}{{3}{7}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsection}{\numberline {\unhbox \voidb@x \hbox {III-B}}Data Structure and Region Selection Algorithm}{7}}
\@writefile{loa}{\defcounter {refsection}{0}\relax }\@writefile{loa}{\contentsline {algorithm}{\numberline {1}{\ignorespaces Segment Tree Preprocess}}{8}}
\newlabel{code:SegmentTreePreprocess}{{1}{8}}
\@writefile{loa}{\defcounter {refsection}{0}\relax }\@writefile{loa}{\contentsline {algorithm}{\numberline {2}{\ignorespaces Region Selection}}{8}}
\newlabel{code:RegionSelection}{{2}{8}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces PNG image, size = 46KB}}{8}}
\newlabel{fig:pngImage}{{4}{8}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces Region separate by CFG}}{8}}
\newlabel{fig:SeparateImage}{{5}{8}}
\@writefile{loa}{\defcounter {refsection}{0}\relax }\@writefile{loa}{\contentsline {algorithm}{\numberline {1}{\ignorespaces Segment Tree Preprocess}}{9}}
\newlabel{code:SegmentTreePreprocess}{{1}{9}}
\@writefile{loa}{\defcounter {refsection}{0}\relax }\@writefile{loa}{\contentsline {algorithm}{\numberline {2}{\ignorespaces Region Selection}}{9}}
\newlabel{code:RegionSelection}{{2}{9}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {IV}Performance Evaluation}{9}}
\newlabel{sec:eval}{{IV}{9}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}1}Date Structure Initialize}{9}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}2}Image Loading}{9}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}3}Region Separation}{9}}
\newlabel{sec:conclusion}{{V}{9}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {V}Conclusion}{9}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces 4KB Image by Proposed Method}}{10}}
\newlabel{fig:4KMy}{{5}{10}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {6}{\ignorespaces 4KB Image by JPEG}}{11}}
\newlabel{fig:4KJpeg}{{6}{11}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {7}{\ignorespaces Proposed method and JPEG comparing}}{12}}
\newlabel{fig:compareToJpeg}{{7}{12}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces Computation Time of Separate Regions}}{12}}
\newlabel{fig:computeTime}{{8}{12}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {6}{\ignorespaces 4KB Image by Proposed Method}}{10}}
\newlabel{fig:4KMy}{{6}{10}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}1}Date Structure Initialize}{10}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {7}{\ignorespaces 4KB Image by JPEG}}{11}}
\newlabel{fig:4KJpeg}{{7}{11}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}2}Image Loading}{11}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {subsubsection}{\numberline {\unhbox \voidb@x \hbox {IV-}3}Region Separation}{11}}
\newlabel{sec:conclusion}{{V}{11}}
\@writefile{toc}{\defcounter {refsection}{0}\relax }\@writefile{toc}{\contentsline {section}{\numberline {V}Conclusion}{11}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces Proposed method and JPEG comparing}}{12}}
\newlabel{fig:compareToJpeg}{{8}{12}}
\@writefile{lof}{\defcounter {refsection}{0}\relax }\@writefile{lof}{\contentsline {figure}{\numberline {9}{\ignorespaces Computation Time of Separate Regions}}{12}}
\newlabel{fig:computeTime}{{9}{12}}

+ 0
- 1
trunk/Main.bcf

@ -1981,7 +1981,6 @@
<!-- SECTION 0 -->
<bcf:bibdata section="0">
<bcf:datasource type="file" datatype="bibtex">SOCA17.bib</bcf:datasource>
<bcf:datasource type="file" datatype="bibtex">ERICA.bib</bcf:datasource>
</bcf:bibdata>
<bcf:section number="0">
<bcf:citekey order="1">Middleton2015</bcf:citekey>


+ 51
- 35
trunk/Main.log

@ -1,4 +1,4 @@
This is pdfTeX, Version 3.14159265-2.6-1.40.17 (MiKTeX 2.9.6210 64-bit) (preloaded format=pdflatex 2018.3.9) 14 MAR 2018 11:35
This is pdfTeX, Version 3.14159265-2.6-1.40.17 (MiKTeX 2.9.6210 64-bit) (preloaded format=pdflatex 2018.3.9) 14 MAR 2018 16:04
entering extended mode
**./Main.tex
(Main.tex
@ -905,17 +905,17 @@ Package biblatex Info: Automatic encoding selection.
\openout5 = `Main.bcf'.
Package biblatex Info: Trying to load bibliographic data...
Package biblatex Info: ... file 'Main.bbl' not found.
No file Main.bbl.
Package biblatex Info: ... file 'Main.bbl' found.
(Main.bbl)
Package biblatex Info: Reference section=0 on input line 105.
Package biblatex Info: Reference segment=0 on input line 105.
(00Abstract.tex
(00Abstract.tex
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <7> on input line 3.
LaTeX Font Info: External font `cmex10' loaded for size
(Font) <5> on input line 3.
) (01Introduction.tex
)
(01Introduction.tex
LaTeX Warning: Citation 'Middleton2015' on page 1 undefined on input line 4.
@ -1004,61 +1004,74 @@ LaTeX Warning: Citation 'Shih17b' on page 5 undefined on input line 34.
LaTeX Warning: Citation 'guo2011simple' on page 5 undefined on input line 37.
[5]) (03Design.tex [6] <figures/real.png, id=67, 481.8pt x 642.4pt>
[5]) (03Design.tex
pdfTeX warning: pdflatex (file ./figures/SystemArchitecture.pdf): PDF inclusion
: found PDF version <1.7>, but at most version <1.5> allowed
<figures/SystemArchitecture.pdf, id=62, 578.16pt x 325.215pt>
File: figures/SystemArchitecture.pdf Graphic file (type pdf)
<use figures/SystemArchitecture.pdf>
Package pdftex.def Info: figures/SystemArchitecture.pdf used on input line 10.
(pdftex.def) Requested size: 516.0pt x 290.26419pt.
[6]
<figures/real.png, id=68, 481.8pt x 642.4pt>
File: figures/real.png Graphic file (type png)
<use figures/real.png>
Package pdftex.def Info: figures/real.png used on input line 33.
(pdftex.def) Requested size: 232.19843pt x 309.59338pt.
<use figures/real.png>
Package pdftex.def Info: figures/real.png used on input line 24.
(pdftex.def) Requested size: 258.0pt x 344.00899pt.
<figures/separate.png, id=68, 481.8pt x 642.4pt>
<figures/separate.png, id=69, 481.8pt x 642.4pt>
File: figures/separate.png Graphic file (type png)
<use figures/separate.png>
Package pdftex.def Info: figures/separate.png used on input line 40.
(pdftex.def) Requested size: 232.19843pt x 309.59338pt.
<use figures/separate.png>
Package pdftex.def Info: figures/separate.png used on input line 31.
(pdftex.def) Requested size: 258.0pt x 344.00899pt.
[7 <./figures/SystemArchitecture.pdf>]
** WARNING: \and is valid only when in conference or peerreviewca
modes (line 66).
Overfull \hbox (31.32147pt too wide) in paragraph at lines 22--35
[][] []
[]
LaTeX Warning: `h' float specifier changed to `ht'.
[7 <./figures/real.png (PNG copy)> <./figures/separate.png (PNG copy)>]
** WARNING: \and is valid only when in conference or peerreviewca
modes (line 57).
) (04Evaluation.tex [8] <figures/my4000.png, id=77, 481.8pt x 642.4pt>
LaTeX Warning: `h' float specifier changed to `ht'.
) (04Evaluation.tex [8 <./figures/real.png (PNG copy)> <./figures/separate.png
(PNG copy)>] [9] <figures/my4000.png, id=87, 481.8pt x 642.4pt>
File: figures/my4000.png Graphic file (type png)
<use figures/my4000.png>
Package pdftex.def Info: figures/my4000.png used on input line 10.
(pdftex.def) Requested size: 309.60315pt x 412.81078pt.
<figures/quality3.jpg, id=78, 481.8pt x 642.4pt>
<figures/quality3.jpg, id=88, 481.8pt x 642.4pt>
File: figures/quality3.jpg Graphic file (type jpg)
<use figures/quality3.jpg>
Package pdftex.def Info: figures/quality3.jpg used on input line 17.
(pdftex.def) Requested size: 309.60315pt x 412.81078pt.
<figures/compareToJpeg.pdf, id=79, 642.4pt x 385.44pt>
<figures/compareToJpeg.pdf, id=89, 642.4pt x 385.44pt>
File: figures/compareToJpeg.pdf Graphic file (type pdf)
<use figures/compareToJpeg.pdf>
Package pdftex.def Info: figures/compareToJpeg.pdf used on input line 26.
(pdftex.def) Requested size: 516.0pt x 309.61102pt.
<figures/computeTime.pdf, id=80, 642.4pt x 385.44pt>
[10 <./figures/my4000.png (PNG copy)>]
<figures/computeTime.pdf, id=93, 642.4pt x 385.44pt>
File: figures/computeTime.pdf Graphic file (type pdf)
<use figures/computeTime.pdf>
Package pdftex.def Info: figures/computeTime.pdf used on input line 41.
(pdftex.def) Requested size: 516.0pt x 309.61102pt.
) (05Conclusion.tex) (06Acknowledge.tex) [9]
) (05Conclusion.tex) (06Acknowledge.tex)
LaTeX Warning: Empty bibliography on input line 143.
\svn@write=\write7
\openout7 = `Main.svn'.
[10 <./figures/my4000.png (PNG copy)>] [11 <./figures/quality3.jpg>] [12 <./fig
ures/compareToJpeg.pdf> <./figures/computeTime.pdf
[11 <./figures/quality3.jpg>] [12 <./figures/compareToJpeg.pdf> <./figures/comp
uteTime.pdf
pdfTeX warning: pdflatex (file ./figures/computeTime.pdf): PDF inclusion: multi
ple pdfs with page group included in a single page
@ -1067,6 +1080,9 @@ ple pdfs with page group included in a single page
LaTeX Warning: There were undefined references.
LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.
Package biblatex Warning: Please (re)run Biber on the file:
(biblatex) Main
(biblatex) and rerun LaTeX afterwards.
@ -1076,10 +1092,10 @@ Package logreq Info: Writing requests to 'Main.run.xml'.
)
Here is how much of TeX's memory you used:
22508 strings out of 493333
425212 string characters out of 3139189
928411 words of memory out of 3000000
25620 multiletter control sequences out of 15000+200000
22513 strings out of 493333
425347 string characters out of 3139189
929148 words of memory out of 3000000
25622 multiletter control sequences out of 15000+200000
37218 words of font info for 72 fonts, out of 3000000 for 9000
1141 hyphenation exceptions out of 8191
64i,8n,56p,1077b,1282s stack positions out of 5000i,500n,10000p,200000b,50000s
@ -1096,9 +1112,9 @@ KTeX 2.9/fonts/type1/public/amsfonts/cm/cmsy8.pfb><C:/Program Files/MiKTeX 2.9/
fonts/type1/urw/times/utmb8a.pfb><C:/Program Files/MiKTeX 2.9/fonts/type1/urw/t
imes/utmr8a.pfb><C:/Program Files/MiKTeX 2.9/fonts/type1/urw/times/utmri8a.pfb>
Output written on Main.pdf (12 pages, 639925 bytes).
Output written on Main.pdf (12 pages, 648352 bytes).
PDF statistics:
150 PDF objects out of 1000 (max. 8388607)
157 PDF objects out of 1000 (max. 8388607)
0 named destinations out of 1000 (max. 500000)
59 words of extra memory for PDF output out of 10000 (max. 10000000)
64 words of extra memory for PDF output out of 10000 (max. 10000000)

BIN
trunk/Main.pdf


+ 0
- 1
trunk/Main.run.xml

@ -81,7 +81,6 @@
</requires>
<requires type="editable">
<file>SOCA17.bib</file>
<file>ERICA.bib</file>
</requires>
</external>
</requests>

BIN
trunk/Main.synctex.gz


+ 1
- 1
trunk/Main.tex

@ -70,7 +70,7 @@
%%\addbibresource{bibliography.bib}
%%\addbibresource{IEEEabrv.bib}
\addbibresource{SOCA17.bib}
\addbibresource{ERICA.bib}
%%\addbibresource{ERICA.bib}
%%%%%%%%%%%%%%%%%
\input{MySetting}


BIN
trunk/Slide/Parameterized Data Reduction Framework for IoT Devices.pptx


BIN
trunk/figures/SystemArchitecture.pdf

