EXPLOITING PRE-CALCULATED DISTANCES IN NEAREST NEIGHBOR SEARCH ON QUERY IMAGES FOR CBIR

Jing Yi Tou 1, Yong Haur Tay 1, Phooi Yee Lau 2

1 Computer Vision and Intelligent Systems (CVIS) Group, Universiti Tunku Abdul Rahman, Malaysia
E-mail: {toujy,tayyh}@utar.edu.my
2 Hanyang University, Seoul, Republic of Korea
E-mail: [email protected]

ABSTRACT

Nearest neighbor algorithms can be applied to content-based image retrieval (CBIR) and classification problems to retrieve similar images. In k-nearest neighbor (k-NN), the winning class is decided by the k nearest neighbors found by comparing the query image against all training samples. In this paper, a new nearest neighbor search (NNS) algorithm is proposed using a two-step process. The first step calculates the distances between all training samples. The second step discards samples falling outside the region in which a nearest neighbor can possibly lie, hence reducing the computation required. Experimental results show that the proposed algorithm obtains all nearest neighbors within the defined search radius; the classification rate is therefore identical to k-NN, but fewer training samples are compared. Only 27.13% of the training samples are compared for 1024 Brodatz textures of thirty-two classes at a radius of 0.2, and 0.56% for a single best neighbor search. Experimental results also show that the proposed algorithm is faster than k-NN for a single best neighbor search but significantly slower for a search with a defined search radius.

Keywords: nearest neighbor search, content-based image retrieval, texture classification

1. INTRODUCTION

The k-nearest neighbor (k-NN) is a simple but effective machine learning algorithm used for classification problems. It compares the query sample against all the existing training samples; the winning class is then determined by a majority vote among the k nearest neighbors [1]. The similarity between samples is usually measured with the Euclidean distance [2], but some features, e.g. covariance matrices, do not lie in a Euclidean space, so other distance metrics are used in those cases [3]. Although k-NN is a simple classifier that requires more computation at query time than some other classifiers, such as neural networks, it is useful when the number of available training samples is too limited to train a neural network.
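For illustration, the following minimal sketch (Python with NumPy; the function and variable names are ours, not from the paper) shows the brute-force k-NN decision rule described above:

import numpy as np
from collections import Counter

def knn_classify(query, train_feats, train_labels, k=1):
    # Brute force: one distance computation per training sample.
    dists = np.linalg.norm(train_feats - query, axis=1)
    # Indices of the k smallest distances.
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest neighbors.
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]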

Besides classification problems, nearest neighbor methods can also be used for CBIR based on information such as color, texture and shape. In CBIR, images whose contents are similar to the query image can be extracted from large databases [4]. Texture analysis is a useful way to compare the similarity between the contents of two images at a higher level by analyzing the textural patterns within the images. Other than CBIR, texture analysis has been applied in a wide range of areas, including biomedical, satellite or aerial image analysis, document analysis [5], iris recognition [6], wood species recognition [7,8] and industrial defect inspection [9]. The problem with k-NN is that as the number of training samples grows, the time required to compute the distances between the query sample and all training samples grows as well. If a large set of textural features is used with k-NN on an embedded platform with low computational capability, the number of distances to be calculated becomes a major concern [10]. Therefore, by eliminating comparisons with training samples that are definitely not close to the query image, fewer distance computations are required. The main objectives of this paper are:

• To propose a nearest neighbor search (NNS) algorithm that uses pre-calculated distances between the training samples to search for the nearest neighbors with less computation.
• To apply the proposed algorithm to nearest neighbor search-based tasks, i.e. the NNS problem and the classification problem.

The rest of the paper is organized as follows: Section 2 explains the proposed NNS algorithm and the concept behind it. Section 3 describes the dataset used in the experiments. Section 4 discusses the experimental results and findings for the NNS and classification problems. Finally, Section 5 concludes the paper.

2. NNS USING PRE-CALCULATED DISTANCES

In this paper, the proposed algorithm searches for the nearest neighbors using the pre-calculated distances between all training samples. The algorithm is divided into two steps: 1) a preparation stage; and 2) a query stage. The first step calculates the features of the training samples and the distances between them, and determines a radius, preparing the information needed for the next step. The second step obtains the nearest neighbors of the query image based on the information calculated in the first step.

2.1 Step 1: Preparation Stage

In the preparation stage, the textural features of the training samples are first calculated. After that, the distances between all training samples are calculated from the features, and a radius is determined to define the largest distance allowed for a sample to be selected as a nearest neighbor. This information is required in the query stage.

2.1.1 Features Criterion

Grey level co-occurrence matrices (GLCM) are used as the features for the algorithm. The GLCM is a popular and useful textural feature that has been widely used ever since it was introduced by Haralick et al. in 1973 [11]. Second order textural features are usually extracted from the GLCMs to be used as the features [12]; in this paper, however, we do not generate the second order textural features but instead use the raw GLCM itself as the feature set. Our previous work showed that the raw GLCM can outperform the second order textural features on the texture classification problem [10]. In this paper, we generate four GLCMs for the directions of 0°, 45°, 90° and 135° with a spatial distance of one pixel and eight grey levels. Each image therefore has a feature set of 256 features.
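The following sketch (Python with NumPy) illustrates the feature extraction described above: four raw 8 × 8 GLCMs, one per direction, concatenated into a 256-dimensional vector. The normalization of each matrix to unit sum is our assumption, not stated in the paper:

import numpy as np

def glcm_features(img, levels=8):
    # Quantize the 8-bit image to `levels` grey levels.
    q = np.clip(img.astype(int) * levels // 256, 0, levels - 1)
    # Pixel offsets for 0, 45, 90 and 135 degrees at distance 1.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
    feats = []
    for dr, dc in offsets:
        glcm = np.zeros((levels, levels))
        rows, cols = q.shape
        for r in range(rows):
            for c in range(cols):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    glcm[q[r, c], q[r2, c2]] += 1
        glcm /= glcm.sum()           # assumed normalization
        feats.append(glcm.ravel())   # 8 x 8 = 64 values per direction
    return np.concatenate(feats)     # 4 x 64 = 256 features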

2.1.2 Distances Criterion

A distance metric is used to calculate the similarity between all pairs of training samples. In this paper, the Euclidean distance is used because it is popular and fast [4]. For each training sample, the calculated distances are sorted from smallest to largest. The distance between a training sample and itself is always zero and therefore need not be stored, while the distance from training sample X to training sample Y is identical to the distance from Y to X. Therefore, the number of distances nD that must be computed and stored is:

$n_D = n_T (n_T - 1) / 2$    (1)

where $n_T$ denotes the number of training samples.
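A sketch of this preparation step (Python with NumPy; the names are ours) that computes and sorts the pairwise distances:

import numpy as np

def precompute_distances(train_feats):
    # Full n x n Euclidean distance matrix; by symmetry only
    # n_T(n_T - 1)/2 of these values are unique, as in Eq. (1).
    diff = train_feats[:, None, :] - train_feats[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))
    # For each sample, the other samples sorted from nearest to
    # farthest (column 0, the sample itself at distance zero, is dropped).
    order = np.argsort(dists, axis=1)[:, 1:]
    return dists, order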

2.1.3 Radius Criterion

A radius r defines the maximum distance allowed between the query image and a retrieved training sample. The r can be defined as a constant value, and its selection affects the width of the search region: the larger the r, the more the obtained nearest neighbors will include neighbors that are further away, yet still located within r. For the experiments conducted in this paper, we select r ∈ {0.1, 0.2, 0.3, 0.4, 0.5}.

2.2 Step 2: Query Stage

When an image is queried, its features are calculated as described in Section 2.1.1. The calculated features are then compared against the training samples that fall within the search range in order to find the final set of training samples nearest to the query image. The training samples are regarded as candidates during the search.

2.2.1 Selecting Nearest Neighbors

The query image is first compared against one of the training samples to obtain a distance d. A set of potential candidates is then selected: the training samples whose pre-calculated distances to that particular training sample lie within the range [d - r, d + r]. All selected candidates are then examined under the same criterion. Each time the next candidate is compared against the query image, the candidate list generated for that candidate is intersected with the current candidate list, so that only samples appearing on both lists remain for the subsequent steps. This continues until every candidate has been examined; those whose actual distance to the query image is within r are the nearest neighbors.
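As a sketch of this search (Python with NumPy, reusing the precomputed matrix from above; a minimal rendering of the described procedure, not the authors' code):

import numpy as np

def radius_search(query_feat, train_feats, dists, r, pivot=0):
    # Compare the query against one training sample (the pivot).
    d = np.linalg.norm(train_feats[pivot] - query_feat)
    neighbors = {pivot} if d <= r else set()
    # Initial candidates: samples whose precomputed distance to the
    # pivot lies within [d - r, d + r]; by the triangle inequality,
    # all others are provably farther than r from the query.
    candidates = set(np.flatnonzero(np.abs(dists[pivot] - d) <= r)) - {pivot}
    while candidates:
        c = candidates.pop()
        dc = np.linalg.norm(train_feats[c] - query_feat)  # one real comparison
        if dc <= r:
            neighbors.add(c)
        # Intersect with the candidate list of the sample just examined:
        # keep only samples within [dc - r, dc + r] of it.
        candidates = {j for j in candidates if abs(dists[c, j] - dc) <= r}
    return neighbors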

2.2.2 Selecting a Single Nearest Neighbor

To further reduce the number of samples explored, the algorithm can be used to search for only a single nearest neighbor. The search is similar to the one above, except that r is replaced by the smallest distance found so far between the query image and any explored training sample. With r shrinking every time a smaller distance is discovered, fewer samples need to be explored. This variant is more suitable for classification problems as it produces only a single nearest neighbor.
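A corresponding sketch for the single-neighbor variant, where the pruning bound shrinks as better neighbors are found (again Python, with our own names):

import numpy as np

def nearest_neighbor(query_feat, train_feats, dists):
    # Start with an arbitrary training sample as the current best.
    best = 0
    best_d = np.linalg.norm(train_feats[best] - query_feat)
    # Prune: by the reverse triangle inequality, sample j is at least
    # |best_d - dists[best, j]| away from the query.
    candidates = {j for j in range(1, len(train_feats))
                  if abs(dists[best, j] - best_d) < best_d}
    while candidates:
        c = candidates.pop()
        dc = np.linalg.norm(train_feats[c] - query_feat)
        if dc < best_d:
            best, best_d = c, dc
        # Re-prune with the shrunken radius and the newly known distance.
        candidates = {j for j in candidates
                      if abs(dists[c, j] - dc) < best_d}
    return best, best_d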

2.3 Formulation of the Proposed Algorithm

The algorithm works because of the relationships between the distances. Take, as an example, a two-dimensional feature space: two sample points B and C at the same distance d1 from a sample point A fall on the same circle of radius d1 centered at A, as shown in Figure 1. The same holds in higher dimensional feature spaces, where the points at equal distance from A form a hypersphere. If a sample point D is at distance d2 from A, the minimum possible distance from D to B or C is d2 - d1 and the maximum possible distance is d2 + d1, as shown in Figure 2.
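This bound is simply the triangle inequality; written out for the distances above:

$\lvert d(A,D) - d(A,B) \rvert \le d(B,D) \le d(A,D) + d(A,B)$

so with $d(A,D) = d_2$ and $d(A,B) = d_1$, the distance from D to B (or C) is confined to the interval $[\,|d_2 - d_1|,\ d_2 + d_1\,]$.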

Fig. 1: Sample points B and C at the same distance d1 from A.

Fig. 2: Minimum and maximum possible distances from sample point D to B and C.

When the distances are stored, the orientations of the sample points relative to each other are unknown, so the exact distance from D to B or C cannot be determined. Therefore, if the minimum possible distance from D to B or C is less than the r defined for D, then B or C could possibly lie within r of D, as shown in Figure 3; otherwise, they can never lie within r of D, as shown in Figure 4.

Fig. 3: Sample points B or C possibly falling within the radius r of D.

Fig. 4: Sample points B or C definitely not falling within the radius r of D.

Because B or C may still fall outside the radius r of D, the actual region of candidates selected under this criterion includes sample points outside the anticipated region and forms a ring around A, as shown in Figure 5. A potential candidate E can then be compared against D using the same method applied to A. Another ring is formed, and the two rings have an overlapped region, as shown in Figure 6. Potential candidates falling outside the overlapped region can therefore be eliminated.

Fig. 5: The ring-shaped region indicates the area in which the potential candidates fall.

Fig. 6: An overlapped region is formed when the second ring is introduced.

By examining all potential candidates, the overlapped region contracts to the exact region of interest as redundant candidates are eliminated. Using this method, candidates that could not possibly fall in the region of interest in the first place are never considered, which reduces the computation required to examine these sample points.

3. EXPERIMENTAL DATASET

The dataset used in the experiments of this paper is the Brodatz texture dataset. Thirty-two textures are selected from the entire Brodatz texture dataset, as used in [13]. Each texture is partitioned into sixteen segments, and each segment is rotated, scaled, and both rotated and scaled. Each of these images has a size of 64 × 64 pixels. Eight of the sixteen segments and their respective variations are randomly selected as the training samples for each texture, and the other eight segments and their respective variations serve as the testing samples in each selection. In the experiments of this paper, ten sets of training and testing samples are randomly selected and the averaged results are reported. The thirty-two textures are shown in Figure 7.

Fig. 7: The thirty-two textures selected from the Brodatz texture dataset [13].

4. EXPERIMENTAL RESULTS AND ANALYSIS

The experiments are conducted on the dataset for the NNS problem and the classification problem.

4.1 Experiments on NNS Problem

The experiments are conducted on the dataset to search for the nearest neighbors within the r values defined in Section 2.1.3. The percentages of training samples explored, nearest neighbors selected, and the ratio of selected nearest neighbors to explored training samples are shown in Table 1, where each row corresponds to one r. The experimental results are averaged over all ten sets of training and testing data.

Table 1: Average percentage (%) of nearest neighbors selected, training samples explored, and ratio of nearest neighbors selected to training samples explored for different r.

r      Nearest Neighbors   Samples     Neighbors Selected against
       Selected            Explored    Samples Explored
0.1     0.11                4.12        2.67
0.2     7.03               27.13       25.91
0.3    35.60               61.28       58.09
0.4    68.06               82.34       82.66
0.5    86.36               93.56       92.30

The nearest neighbors obtained using the algorithm are verified against the nearest neighbors obtained by comparing the query image with every training sample, and the results are identical for the given radius. The experimental results show that the algorithm is capable of identifying the nearest neighbors without exploring all the training samples. They also show that the larger the r, the more nearest neighbors are selected. When r is 0.1, very few or no nearest neighbors are selected, so r must not be too small. The results further show that the ratio of selected nearest neighbors to explored training samples is smaller when r is smaller, indicating that more training samples that are not eventually selected as nearest neighbors are explored for small r. This leaves room for future improvements that reduce the need to explore samples that do not lie within r.

The proposed algorithm differs from conventional NNS algorithms such as KD-Trees, Metric Trees and Cover Trees, which use trees to divide the problem space into smaller regions containing certain numbers of samples prior to the query stage [14]. These NNS algorithms first identify the region the query sample falls in, inspect the samples within the selected region, and then backtrack up the tree to discover other potential nearest neighbors. In our work, the proposed algorithm only requires the pre-calculated distances between all training samples; the region of interest is defined per query and shrinks as the search proceeds, and the best neighbors are guaranteed at the end of the search without the need to explore other regions for verification.

4.2 Experiments on Classification Problem

For the classification problem, the average classification accuracies of the k best neighbors selected using the proposed algorithm, compared with k-NN, are shown in Table 2 in percentage, where the columns denote the methods and the rows denote k.

Table 2: Experimental results for texture classification of the k best neighbors selected using the proposed algorithm compared to k-NN (%).

                       Proposed Algorithm
k     k-NN    r = 0.1  r = 0.2  r = 0.3  r = 0.4  r = 0.5
1    81.05     79.47    81.05    81.05    81.05    81.05
2    81.05     76.41    77.83    77.83    77.83    77.83
3    79.09     76.87    77.97    77.97    77.97    77.97
4    79.50     75.60    76.77    76.77    76.77    76.77
5    79.02     76.10    77.40    77.40    77.40    77.40
6    78.89     75.42    76.56    76.56    76.56    76.56
7    78.04     75.39    76.54    76.54    76.54    76.54
8    77.71     74.46    75.70    75.68    75.68    75.68
9    77.15     74.46    75.70    75.66    75.66    75.66
10   76.70     73.43    74.77    74.71    74.71    74.71

The best experimental result obtained is 81.05%, when r is 0.2 to 0.5 and k is 1, which is the same as k-NN. At an r of 0.2, only 27.13% of the training samples need to be compared against the query image, as opposed to all of them for k-NN. The experimental results show that the proposed search will never outperform k-NN in terms of classification rate: in theory, the proposed algorithm only eliminates samples that are far from the query image, and both the proposed algorithm and k-NN use the nearest neighbors to classify, so the best classification rate the proposed algorithm can achieve equals that of k-NN when an appropriate r is selected.
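Tying the pieces together, a hypothetical sketch (Python, reusing the radius_search function sketched in Section 2.2.1; names are ours) of classifying a query from the k best neighbors found within r:

import numpy as np
from collections import Counter

def classify_by_radius(query_feat, train_feats, train_labels, dists, r, k=1):
    # Retrieve all neighbors within r, then vote among the k nearest.
    found = radius_search(query_feat, train_feats, dists, r)
    ranked = sorted(found,
                    key=lambda i: np.linalg.norm(train_feats[i] - query_feat))
    votes = Counter(train_labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0] if votes else None

For brevity the sort recomputes distances already measured inside radius_search; caching them would avoid the duplicated work.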

4.3 Experiments on Single Nearest Neighbor Search

The experiment is conducted on the dataset to search for a single best neighbor as defined in Section 2.2.2. The percentage of training samples explored is far smaller than in the search of Section 4.2: only 0.56% of the training samples are explored, an average of 5.69 training samples out of a set of 1024. When more training samples are involved, the percentage of samples explored is expected to shrink further, as the average number of explored samples is expected to remain small. This search allows r to be updated whenever a smaller distance is discovered, which further contracts the search radius and reduces the exploration of training samples.

With far fewer training samples to explore, less time is required for the search. The comparison of the time required by the proposed algorithm and by k-NN is shown in Table 3. The computer used for the experiment has an Intel® Core™ Duo T2350 processor at 1.86 GHz and 2 GB of RAM, running Windows Vista™ Home Premium. The experiments are conducted in MATLAB®. Two different k-NN implementations are used: the first is based on the same metric function used in the proposed algorithm, and the second is the knnclassify function provided in the MATLAB Bioinformatics Toolbox.

Table 3: Comparison of average time required for a single query between the proposed algorithm and k-NN.

Methods                                   Time (ms)
Proposed Algorithm (r = 0.2)                   4992
Proposed Algorithm (Single Neighbor)             39
k-NN (k = 1, Self-coded)                         72
k-NN (k = 1, knnclassify)                        23

The experimental results show that with 1024 training samples, the proposed algorithm with a selected r is significantly slower than both k-NN implementations. For the single best neighbor search, it is slower than the knnclassify function but faster than the self-coded k-NN. This indicates that the proposed algorithm has not been optimized to the level of the MATLAB-provided k-NN function, whose operations are more optimal. Therefore, with optimized code, the proposed algorithm for a single neighbor search is expected to outperform k-NN, which must calculate the distances between the query sample and all training samples.

With only 1024 training samples involved, the proposed NNS algorithm with a selected r is significantly slower due to the complexity of the algorithm, where most of the time may be consumed by overhead. This overhead is caused by the process of inspecting and updating the candidate list. When the number of training samples is small, this overhead contributes a large part of the computational time during the search. Therefore, the proposed algorithm is expected to perform better when applied to a huge dataset of training samples, where the significance of the overhead relative to the total computational time will be lower.

5. CONCLUSIONS

The proposed NNS algorithm is capable of determining all the nearest neighbors correctly within a defined radius. It can therefore be used to search for the images most similar to a query image, and it can also be applied to image analysis problems such as texture classification. Experimental results on the Brodatz texture dataset suggest that the classification rate is the same as k-NN when a suitable r is selected, even though it can never outperform k-NN. The speed of the proposed NNS algorithm with a selected r is significantly slower than k-NN; this is likely due to the small size of the dataset used in the experiments, which fails to show the speed advantage of the algorithm. The proposed algorithm is expected to perform better when a huge dataset is involved and when the distance calculation is more expensive. On the other hand, the proposed NNS algorithm for a single best neighbor is faster than a k-NN using the same metric function; the knnclassify function in the MATLAB Bioinformatics Toolbox indicates that the algorithm can be further optimized for better performance. Our future work focuses on implementing the NNS on a large dataset without a complete set of pre-calculated distances between all training samples, since computing all distances within a large dataset is impractical. In that setting, the algorithm can only eliminate samples that are impossible with certainty and must maintain a list of potential nearest neighbors.

6. REFERENCES

[1] Ripley, B.D., Pattern Recognition and Neural Networks, Cambridge University Press, United Kingdom, 1996.
[2] Perlovsky, L.I., Neural Networks and Intellect: Using Model-Based Concepts, Oxford University Press, New York, 2001.
[3] Tuzel, O., Porikli, F., and Meer, P., "Region Covariance: A Fast Descriptor for Detection and Classification," European Conference on Computer Vision, vol. 1, pp. 697-704, 2006.
[4] Datta, R., Joshi, D., Li, J., and Wang, J.Z., "Image Retrieval: Ideas, Influences, and Trends of the New Age," ACM Computing Surveys, vol. 40, no. 2, 2008.
[5] Pietikainen, M.K. (Ed.), Texture Analysis in Machine Vision, World Scientific Publishing, Singapore, 2000.
[6] Ng, R.Y.F., Tay, Y.H., and Mok, K.M., "Iris Recognition Algorithms Based on Texture Analysis," Proceedings of the International Symposium on Information Technology 2008, vol. 2, pp. 904-908, IEEE, Kuala Lumpur, 2008.
[7] Lew, Y.L., Design of an Intelligent Wood Recognition System for the Classification of Tropical Wood Species, M.E. Thesis, Universiti Teknologi Malaysia, 2005.
[8] Tou, J.Y., Tay, Y.H., and Lau, P.Y., "Rotational Invariant Wood Species Recognition through Wood Species Verification," Proc. 1st Asian Conference on Intelligent Information and Database Systems, pp. 115-120, IEEE, 2009.
[9] Tuceryan, M., and Jain, A.K., "Texture Analysis," in C.H. Chen, L.F. Pau, and P.S.P. Wang (Eds.), The Handbook of Pattern Recognition and Computer Vision (2nd Edition), World Scientific Publishing Co, 1998.
[10] Tou, J.Y., Khoo, K.K.Y., Tay, Y.H., and Lau, P.Y., "Evaluation of Speed and Accuracy for Comparison of Texture Classification Implementation," Proc. Int'l Workshop on Advanced Image Processing, 2009.
[11] Haralick, R.M., Shanmugam, K., and Dinstein, I., "Textural Features for Image Classification," IEEE Transactions on Systems, Man and Cybernetics, pp. 610-621, 1973.
[12] Petrou, M., and Sevilla, P.G., Image Processing: Dealing with Texture, Wiley, 2006.
[13] Ojala, T., Pietikainen, M., and Kyllonen, J., "Gray Level Cooccurrence Histograms via Learning Vector Quantization," Proc. 11th Scandinavian Conference on Image Analysis, pp. 103-108, 1999.
[14] Kibriya, A.M., and Frank, E., "An Empirical Comparison of Exact Nearest Neighbour Algorithms," Proceedings of the 11th European Conference on Principles and Practice of Knowledge Discovery in Databases, vol. 4702, pp. 140-151, 2007.
