Thursday, October 25, 2012

Histogram Equalization

A histogram is a graphical representation of the tonal values of an image. A greyscale image has only one channel, whose values range from 0 to 255. We plot the histogram by counting the number of pixels that fall under each tonal value.

Grey histogram Equalization

In grey histogram equalization we enhance the contrast of an image by spreading its usable pixel values across the contrast range of the display device. This is accomplished by spreading out the most frequent intensity values to the neighboring intensities. First we compute the cumulative distribution function (CDF) of the histogram; then we apply the general histogram equalization formula:

h(v) = round( (cdf(v) - cdf_min) / (M × N - cdf_min) × (L - 1) )

where cdf_min is the minimum non-zero value of the cumulative distribution function, M × N is the image's number of pixels and L is the number of grey levels used. Some sample results are shown below. Note that equalization is done within the two regions of interest.
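As a sketch of this pipeline (histogram, CDF, then the mapping formula), here is a C++ version. The blog's custom Image and ROI classes are not shown, so a plain vector of 8-bit pixels stands in for them:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Grey-level histogram equalization over a flat buffer of 8-bit pixels.
std::vector<uint8_t> equalize(const std::vector<uint8_t>& pixels) {
    const int L = 256;                        // number of grey levels
    const long long MN = pixels.size();       // M x N, total pixel count

    // 1. Histogram: count pixels at each tonal value.
    std::vector<long long> hist(L, 0);
    for (uint8_t p : pixels) hist[p]++;

    // 2. Cumulative distribution function of the histogram.
    std::vector<long long> cdf(L, 0);
    long long running = 0;
    for (int v = 0; v < L; ++v) { running += hist[v]; cdf[v] = running; }

    // cdf_min is the smallest non-zero CDF value.
    long long cdfMin = 0;
    for (int v = 0; v < L; ++v)
        if (cdf[v] > 0) { cdfMin = cdf[v]; break; }

    // 3. Map each pixel: h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1)).
    std::vector<uint8_t> out(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        uint8_t v = pixels[i];
        out[i] = static_cast<uint8_t>(std::llround(
            double(cdf[v] - cdfMin) / double(MN - cdfMin) * (L - 1)));
    }
    return out;
}
```

The same routine runs per ROI by passing only the pixels inside the region.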

(a)   Original Image

(b)   Histogram for ROI 1 before and after equalization (x = 10, y = 10, sx = 300, sy = 300)



(c)   Histogram for ROI 2 before and after equalization (x = 315, y = 315, sx = 400, sy = 400)

(d)   Resulting equalized image



Color Histogram Generation (RGB)

Color images have three channels, namely red, green and blue. To generate the histograms we therefore apply the same principle used for greyscale images to each channel separately.

Color Histogram Equalization (RGB)

In color histogram equalization we apply the method used for grey-scale equalization to each of the three channels separately. The results are shown below.

(a)   Original Image
(b)   Histogram for ROI 1 – before and after equalization – RGB (x = 10, y = 10, sx = 300, sy = 300)

(c)   Resulting equalized image (all channels together)

(d)   Equalization done separately – original image and R, G, B channels respectively (clockwise)


HSI histogram generation

HSI is a color model that is more intuitive and perceptually relevant than RGB, and it has wide applications in computer graphics and vision. Like RGB it has three channels, namely Hue (H), Saturation (S) and Intensity (I). Hue ranges from 0 to 360 degrees, while the other two range from 0 to 1. For the sake of calculation we rescale saturation to fall between 0 and 99 and intensity between 0 and 255. For HSI histogram generation we first convert the image from the RGB color model to HSI, then generate the histogram for each of the three channels separately. The RGB-to-HSI and HSI-to-RGB conversion formulas follow the Gonzalez and Woods text.
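As a sketch of the forward direction, here is the Gonzalez and Woods RGB-to-HSI conversion in C++. Inputs are assumed normalized to [0, 1]; H comes back in degrees and S, I in [0, 1] (rescaling S to 0–99 and I to 0–255 for histogramming would follow as a separate step):

```cpp
#include <algorithm>
#include <cmath>

// RGB -> HSI per Gonzalez and Woods: I is the channel mean, S measures how
// far the color is from grey, and H is an angle derived from the chromatic
// geometry of the RGB cube.
void rgbToHsi(double r, double g, double b,
              double& h, double& s, double& i) {
    const double PI = std::acos(-1.0);
    const double eps = 1e-12;               // guards divisions by zero
    i = (r + g + b) / 3.0;
    double minRgb = std::min({r, g, b});
    s = (i > eps) ? 1.0 - minRgb / i : 0.0;

    // theta = acos( 0.5[(R-G)+(R-B)] / sqrt((R-G)^2 + (R-B)(G-B)) )
    double num = 0.5 * ((r - g) + (r - b));
    double den = std::sqrt((r - g) * (r - g) + (r - b) * (g - b));
    double theta = std::acos(num / std::max(den, eps)) * 180.0 / PI;
    h = (b <= g) ? theta : 360.0 - theta;   // H in [0, 360)
}
```

Note that H is undefined for pure grey pixels (den is zero), which the eps guard papers over; a real implementation would flag that case explicitly.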


HSI histogram equalization

After generating the HSI histogram we compute the equalized histogram from it, using the same equalization formula as before. After equalization we convert the image back to RGB for display. The resulting images are shown below.

(a)   HSI histogram for I channel – before and after equalization (x = 10, y = 10, sx = 300, sy = 300)
(b)   HSI equalization (original and I channel respectively)


Program

Some of the functions I used to do the above operations in C++ are on Git here. Note that the custom classes Image and ROI are not included; however, this code is enough to explain the main logic.



Image thresholding and smoothing

Thresholding helps to visualize the useful portion of an image, whereas smoothing helps to remove noise from it. Images captured through a camera are generally analog and are therefore digitized by sampling. The digitized image is stored in a computer as an array of pixels, each holding information such as the color or intensity of that portion of the image. Pixel size depends on the sample size of the digitization process. In image thresholding each pixel value is set to either black or white depending on a user-defined threshold value. In smoothing each pixel value is set based on the values of the neighboring pixels.


Image Scaling in Grayscale images

Image scaling is the process of incrementing the pixel values by a user-defined scaling factor, which increases the brightness of the image. In grey-scale image scaling the user-defined value is added to each pixel value. Below are the image scaling results with scaling factors 25 and 50 in regions of interest (ROI) one and two respectively. The brightness of the image is increased by scaling. The original image is shown on the left and the scaled image on the right.
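A minimal sketch of this in C++, again with a flat pixel vector standing in for the Image class. Clamping the result at 255 is my assumption; the post does not say how overflow is handled:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Grey-scale image scaling: add a user-defined factor to every pixel,
// clamping at 255 (assumed) so values stay in the 8-bit range.
std::vector<uint8_t> scaleImage(const std::vector<uint8_t>& pixels, int factor) {
    std::vector<uint8_t> out(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i)
        out[i] = static_cast<uint8_t>(std::min(int(pixels[i]) + factor, 255));
    return out;
}
```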


Image thresholding in Grayscale images

Thresholding in grayscale images is done by setting each pixel value to 0 or 255 based on a threshold value defined by the user. For example, if the threshold is 125 then all pixel values below 125 are set to 0 and all pixel values above 125 are set to 255. In this way the image features are more easily identifiable. Below is the thresholding result on a grey-scale image with threshold values 100 and 150 for ROI one and two respectively.
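As a sketch in C++ (how a pixel exactly at the threshold is treated is my choice; the post leaves that case open):

```cpp
#include <cstdint>
#include <vector>

// Grey-scale thresholding: pixels strictly above the user-defined threshold
// become white (255), the rest black (0).
std::vector<uint8_t> threshold(const std::vector<uint8_t>& pixels, uint8_t t) {
    std::vector<uint8_t> out(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i)
        out[i] = (pixels[i] > t) ? 255 : 0;
    return out;
}
```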
Image thresholding in Color Images

In color thresholding the Euclidean distance between each RGB pixel of the image and a user-defined color value is compared with a user-defined threshold TC. If the distance falls below the threshold the pixel is set to white; all other pixels are set to black. Sample results with reference color RGB (25,25,25) and TC = 205 and TC = 100 for ROI one and two respectively are shown below.
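The distance test itself can be sketched as a small predicate (the function name is mine, not from the post):

```cpp
#include <cmath>

// Colour-distance test: Euclidean distance between a pixel's RGB value and
// the user-defined reference colour, compared against the threshold TC.
// Returns true when the pixel should be painted white.
bool withinColorThreshold(int r, int g, int b,
                          int refR, int refG, int refB, double tc) {
    double dr = r - refR, dg = g - refG, db = b - refB;
    return std::sqrt(dr * dr + dg * dg + db * db) < tc;
}
```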


Adaptive Weighted thresholding in Grayscale images

In normal thresholding the single parameter provided by the user affects the entire image, which can give undesirable results on images with wide variability in intensity. Adaptive weighted thresholding overcomes this by using a weight factor W that is compared against the mean of the pixels falling inside an odd-sized window centered on the current pixel. Results of such thresholding with window sizes 3 and 5 and threshold factors 15 and 20 on ROI one and two respectively are shown below.
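The post does not spell out exactly how W enters the comparison, so the sketch below follows one common variant: a pixel becomes white when it exceeds the local window mean minus W. The row-major vector layout and clamped borders are also my assumptions:

```cpp
#include <algorithm>
#include <vector>

// Adaptive weighted thresholding over an odd win x win window.
std::vector<int> adaptiveThreshold(const std::vector<int>& img,
                                   int width, int height, int win, int w) {
    const int half = win / 2;
    std::vector<int> out(img.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            long sum = 0;
            int count = 0;
            for (int dy = -half; dy <= half; ++dy)
                for (int dx = -half; dx <= half; ++dx) {
                    int yy = std::clamp(y + dy, 0, height - 1);  // clamp at borders
                    int xx = std::clamp(x + dx, 0, width - 1);
                    sum += img[yy * width + xx];
                    ++count;
                }
            double mean = double(sum) / count;   // local window mean
            out[y * width + x] = (img[y * width + x] > mean - w) ? 255 : 0;
        }
    return out;
}
```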

One dimensional Smoothing in Grayscale Images

In one-dimensional smoothing the pixel values are averaged along one dimension of the two-dimensional pixel plane. The results of one-dimensional smoothing in the X and Y directions separately are shown below with window sizes 3 and 5 on ROI one and two respectively. The window size should be chosen carefully so that blurring is kept to the minimal level possible.

Two dimensional Smoothing in Grayscale images

In two-dimensional smoothing the mean of the whole two-dimensional window is used as the pixel value. The results of two-dimensional smoothing with window sizes 3 and 5 on ROI one and two are shown below.
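A sketch of the two-dimensional mean filter, with the same row-major layout and clamped borders assumed as before (the post does not describe edge handling):

```cpp
#include <algorithm>
#include <vector>

// Two-dimensional mean smoothing: each output pixel is the average of the
// win x win window centred on it.
std::vector<int> smooth2D(const std::vector<int>& img,
                          int width, int height, int win) {
    const int half = win / 2;
    std::vector<int> out(img.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            long sum = 0;
            int count = 0;
            for (int dy = -half; dy <= half; ++dy)
                for (int dx = -half; dx <= half; ++dx) {
                    int yy = std::clamp(y + dy, 0, height - 1);
                    int xx = std::clamp(x + dx, 0, width - 1);
                    sum += img[yy * width + xx];
                    ++count;
                }
            out[y * width + x] = int(sum / count);   // window mean
        }
    return out;
}
```

The one-dimensional variant is the same loop restricted to a single direction.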
Program

Some of the functions I used to do the above operations in C++ are on Git here. Note that the custom classes Image and ROI are not included; however, this code is enough to explain the main logic.




Tuesday, October 23, 2012

Comparing Wi-Fi performance using experiments

Downloading files via HTTP is one of the most common tasks we do on the internet. Delays in file download can be caused by factors such as propagation, processing, file size and traffic on the network. Delays caused by propagation are unavoidable, so our interest is in keeping the other delay factors as low as possible. In this paper we design and conduct experiments at two public Wi-Fi spots to identify the effects of key factors perceived to affect file download time over the internet. In doing so we explore whether ping can be used to predict file download time, and the impact of location on it. From the results and analysis we draw conclusions on the comparative performance of the two Wi-Fi spots using statistical models.

INTRODUCTION

File downloading is a common task on the internet. The factors affecting download delay include traffic on the network, file size and time of day. Nowadays many businesses provide free wireless networks at public places to improve customer satisfaction. Such Wi-Fi hotspots are often crowded with users, which causes delays in file downloading.

In this paper we perform experiments at two Wi-Fi spots with a view to understanding their comparative performance with regard to file downloading. HTTP is a common internet protocol used for file retrieval. We compare HTTP file download performance at two separate Wi-Fi hotspots at the University Mall near Tampa. Ping is generally used by network administrators to check the status of remote servers; by default it sends a 32-byte ICMP packet to the server, which echoes the same packet back. Using ping we can thus estimate the total round-trip time of a packet. In this paper we also run experiments to see whether ping can be used to predict file download time.

The related work section gives a brief overview of ongoing research on file download performance. It is followed by the experimental setup and results. Finally the results are analyzed and conclusions are drawn.


RELATED WORK

File downloading performance has been a major field of research among network scientists in the past decade. Researchers have suggested that TCP slow start causes delays for small file downloads, and that DNS lookups add delay to file downloading. The delay is caused by numerous factors that are hard to estimate theoretically; however, time of day and file size are known to play a major role in determining download time. Prefetching files at the server has been suggested to improve performance, as has having the server reject requests that exceed a threshold.

EXPERIMENTS

The experiment was conducted at the two free Wi-Fi spots available at the University Mall.

Setup

An HTTP server was set up in the ENB building on the USF campus. The C-based HTTP web server Weblite, written by Dr. Ken Christensen at USF, was used for the experiment. A UNIX-based client was set up to request files from the server through HTTP GET requests. Two files of size 16 KB and 484 KB were used; we regard these as small and large based on a pre-experimental speed test at the location. Each experiment consisted of 10 HTTP GET requests sent to the server. Once a request finished, the client waited 100 milliseconds before sending the next. Ten such experiments were run, with an interval of five seconds between experiments. The experiments were done at two hotspots at the University Mall, namely the Food Court and the Center Court, during peak traffic hours at noon and during low traffic hours at night. Each experiment yields a sample mean file download time over its ten requests. Ping times were also recorded with the command-line tool during experimentation, and the average of a random sample was used.

Results

The results of the experiments are tabulated in Table I. It shows the mean file download time for each of the eight scenarios along with the mean ping time for each experiment. The mean for each experiment is based on the sampled download times during the experiment. Separate columns are shown for the time of day and the file size.

TABLE I. EXPERIMENT RESULTS


Comparison

We have the sample mean file download times for the ten experiments run in each of the eight scenarios. We now compare the results using confidence interval estimates: first we determine the difference between the sample means, then we use t-scores to construct confidence intervals on that difference. The result is shown in Table II.

TABLE II. COMPARATIVE RESULTS



We also checked whether ping can predict download time. Ping gives the round-trip time for 32 bytes of data, so the one-way time is half of that. Using the mean ping times for random samples in Table I, we can estimate the download time by multiplying the one-way per-byte ping time by the file size. The result is shown in Table III.
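The arithmetic can be written out as a small helper. This is a sketch; the function name and the numbers in the usage are illustrative, not taken from Table I:

```cpp
// Ping-based estimate: ping reports the round-trip time for 32 bytes, so the
// one-way time per byte is (rtt / 2) / 32, and the estimated download time is
// that per-byte time multiplied by the file size.
double estimateDownloadMs(double pingRttMs, long fileSizeBytes) {
    double perByteMs = (pingRttMs / 2.0) / 32.0;   // one-way ms per byte
    return perByteMs * fileSizeBytes;
}
```

For example, a 4 ms round trip predicts roughly one second for a 16 KB file under this model.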


TABLE III. PING TIME


As we can see, ping can mostly predict the download time for smaller files; however, for larger files it cannot.

ANALYSIS

The reason for the variation in file download time between the two locations is not obvious from the results. Traffic on the network is likely one factor: in the afternoon there were more people in the food court than in the center court, while at night there were more people in the center court than in the food court (the food court was closed at night). The impact of file size is still not very clear; however, as the averages show, the large file download was slightly faster than the small file download, which could be the impact of slow start in the TCP protocol. The reason ping cannot predict large file download time is the packet loss observed during the ping experiment: the link is not reliable over long periods, and random delays occur due to unknown factors.

CONCLUSION

The experiments were conducted to compare two public Wi-Fi spots. The results showed that one of the spots performed better than the other. We also showed that ping can predict download time if the random delays seen during large file downloads can be avoided.