Mapping of Individual Oil Palm Trees Using Airborne Hyperspectral Sensing: An Overview

This overview is a preliminary step toward developing an approach for mapping individual oil palm trees from airborne hyperspectral imaging. The study describes airborne hyperspectral sensors in different fields, particularly agriculture, by comparing and analyzing their suitability for different applications. The emphasis is on image processing for identifying and mapping individual oil palm trees, using the image histogram to examine the RGB bands. An algorithm is designed to discover the contribution of different materials to a single mixed pixel and to convert it into a pure pixel. The techniques employed for this purpose are Linear Spectral Mixture Analysis (LSMA), the Mix to Pure Converter (MPC), and the Euclidean norm.
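To illustrate the unmixing idea behind LSMA, the following is a minimal sketch in Python. The endmember spectra, band count, and mixed pixel are invented for illustration, not taken from the study; the Euclidean norm of the fit residual is shown as a goodness-of-fit measure.

```python
import numpy as np

# Hypothetical endmember spectra (columns): oil palm canopy, soil, shadow.
# Reflectance values for a toy 4-band sensor; purely illustrative.
E = np.array([
    [0.05, 0.30, 0.02],   # band 1
    [0.08, 0.35, 0.03],   # band 2
    [0.45, 0.40, 0.05],   # band 3 (NIR: vegetation bright)
    [0.50, 0.42, 0.06],   # band 4
])

def unmix(pixel, endmembers):
    """Linear spectral unmixing: solve pixel = E @ f with sum(f) = 1.

    The sum-to-one constraint is imposed by appending a row of ones,
    a common least-squares formulation of LSMA.
    """
    A = np.vstack([endmembers, np.ones(endmembers.shape[1])])
    b = np.append(pixel, 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Synthetic mixed pixel: 60% palm, 30% soil, 10% shadow.
mixed = E @ np.array([0.6, 0.3, 0.1])
fractions = unmix(mixed, E)
residual = np.linalg.norm(mixed - E @ fractions)  # Euclidean norm of the fit error
print(np.round(fractions, 2))  # recovers the fractions [0.6, 0.3, 0.1]
```

A pixel whose largest recovered fraction exceeds a chosen threshold could then be treated as "pure" for that material, which is one plausible reading of the mixed-to-pure conversion described above.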


Introduction
Trees play a major role in the economic growth of any country, and palms are among the most well-known and extensively cultivated plant families. They have long had an important function in daily life, providing beverages, building materials, chemical and industrial products, cosmetics, feeds, furniture, fuel, and more. Many products are derived from the palm, notably palm oil, an edible vegetable oil obtained from the fruit of the oil palm tree and the second most widely produced edible oil after soybean oil. Demand for palm oil is rising and is expected to climb further, particularly for use in biodiesel, which is promoted as a form of renewable energy that greatly reduces net emissions of carbon dioxide into the atmosphere and lessens the greenhouse effect. Keeping this importance in view, this study aims to investigate the technological impacts on oil palm plantations using techniques such as airborne hyperspectral remote sensing. This review presents a general idea of hyperspectral remote sensing, the mapping of oil palm trees and other species, and the relevant technological developments. Hyperspectral remote sensing, a comparatively new technology, combined with a land information system is believed to be a good technique to assist agricultural land managers in making fast decisions. It supports researchers and scientists in exploring the environment, atmosphere, minerals, plantations, and vegetation. With direct geo-referencing, the data collected by the remote sensing system can be related directly to the Earth and positions can be measured accurately.
Hyperspectral remote sensing is also known as imaging spectroscopy. Physicists and chemists have used imaging spectroscopy in the laboratory for the detection of materials and their composition for over 100 years. In the mid-1980s, geologists adopted it for mineral mapping. The detection of a material depends on the spectral coverage, spectral resolution, and signal-to-noise ratio of the spectrometer. The approach that merges imaging and spectroscopy is hyperspectral remote sensing. Sensing systems can be classified into two categories: multispectral and hyperspectral.
Multispectral imagery is produced by sensors that measure reflected energy within several specific sections (also called bands) of the electromagnetic spectrum. Multispectral sensors usually have between 3 and 10 different band measurements in each pixel of the images they produce. Examples of bands in these sensors typically include visible green, visible red, and near infrared. Landsat, QuickBird, and SPOT are well-known satellites that carry multispectral sensors.
Hyperspectral sensors measure energy in narrower and more numerous bands than multispectral sensors. Hyperspectral images can contain as many as 200 (or more) contiguous spectral bands. The numerous narrow bands of hyperspectral sensors provide a nearly continuous spectral measurement across the electromagnetic spectrum and are therefore more sensitive to subtle variations in reflected energy. Images produced by hyperspectral sensors contain much more data than images from multispectral sensors and have a greater potential to detect differences among land and water features. For example, multispectral imagery can be used to map forested areas, while hyperspectral imagery can be used to map tree species within the forest. Jiang et al. (2004) described hyperspectral technology, or imaging spectrometry, as one of the important leading research fields of remote sensing. Since the first imaging spectrometer was produced in 1983, hyperspectral remote sensing has been successfully applied in many fields in less than 20 years, and has shown great potential and bright prospects (Vane and Goetz, 1993). However, research on the processing and application of hyperspectral remote sensing data has so far fallen far behind research on the sensors themselves. Research on the processing, analysis, and information extraction of hyperspectral data should be strengthened to extract more useful information, make full use of the advantages and potential of hyperspectral remote sensing, and promote the development of this new and important technology (Mazer et al., 1988).
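The contrast between the two sensor classes can be made concrete with a small sketch: a hyperspectral spectrum of many contiguous narrow bands can be reduced to a few broad multispectral bands by averaging. The reflectance spectrum and the band ranges below are synthetic illustrations, not real sensor specifications.

```python
import numpy as np

# A hyperspectral sensor samples the spectrum in many contiguous narrow
# bands; a broad multispectral band can be approximated by averaging the
# narrow bands that fall inside its spectral range.
wavelengths = np.arange(400, 2500, 10)  # 210 narrow 10 nm bands, 400-2490 nm
spectrum = np.clip(np.sin(wavelengths / 300.0) * 0.3 + 0.4, 0, 1)  # toy reflectance

def to_broadband(wl, refl, band_ranges):
    """Average narrow-band reflectances within each broad band."""
    return [refl[(wl >= lo) & (wl < hi)].mean() for lo, hi in band_ranges]

# Illustrative green / red / near-infrared broad bands (nm):
broad = to_broadband(wavelengths, spectrum, [(520, 600), (630, 690), (760, 900)])
print(len(wavelengths), "narrow bands ->", len(broad), "broad bands")
```

The averaging discards the fine spectral structure, which is exactly the information hyperspectral sensors retain and multispectral sensors lose.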

Hyperspectral remote sensing
Remote sensing is an important tool for exploring, monitoring, and analyzing vegetation, water, soil, and wetland systems, and data acquired from aircraft and satellite platforms have been widely used for mapping them and monitoring their change. The image data obtained from a hyperspectral sensor are composed of many very narrow contiguous spectral bands spanning the visible, near-infrared (IR), mid-IR, and thermal IR portions of the electromagnetic spectrum. Such a sensor collects more than 200 bands of data, yielding an effective reflectance spectrum and spectral signature for every pixel. After atmospheric and topographic corrections are applied, the images are examined and compared with field and laboratory reflectance measurements to distinguish, map, and analyze materials such as mineral deposits. The analysis of remotely sensed data is performed using a variety of image processing techniques, including analog (visual) image processing and digital image processing. Analog and digital analysis of remotely sensed data seeks to detect and identify important phenomena in the scene. Once identified, the phenomena are usually measured, and the information is used in solving problems. Optimum results are often achieved using a synergistic combination of visual and digital image processing.
Digital image processing is used for many applications, including weapon guidance systems (e.g., the cruise missile), medical image analysis (e.g., x-raying a broken arm), nondestructive evaluation of machinery and products (e.g., on an assembly line), and the analysis of Earth resources. This review focuses on applying remote sensing digital image processing to the extraction of useful Earth resource information, defined as any information concerning terrestrial vegetation, soils, minerals, rocks, water, certain atmospheric characteristics, and urban infrastructure.

Airborne hyperspectral imaging for different applications
The most widely used technology for mapping purposes is airborne and spaceborne imagery (Neto, 2001). The diverse information obtained from airborne and spaceborne sensors is used to resolve many problems in the comprehensive study of the Earth. A survey of the sensors and existing imagery considered most relevant by the author is also presented. The two main platforms used at typical altitudes of 250 km to 400 km are the American Space Shuttle and, for now, the Russian MIR orbital station. Both the Space Shuttle and the MIR station permit human interaction onboard, allowing quick decisions and intervention in physical problems. Also, by flying at lower altitudes, these sensors obtain better-resolution information about the surface, not to mention the possibility of using non-electronic photographic cameras.
The experiments carried out on the Space Shuttle with MOMS (Modular Optoelectronic Multispectral Scanner) are well suited to mapping because of the extra advantage that it can take simultaneous views of the Earth's surface, acquiring stereo images with a small time delay between them and thereby avoiding sun-angle variation and different illuminations of scenes (Ebner et al., 1988; Neto, 1993; Kramer, 1994). The characteristics of these sensors and the resulting imagery and resolution are also summarized, and their suitability for mapping at local, regional, and global scales is examined. Opto-electronic sensor imagery is becoming suitable for map production at scales that until recently were only possible with aerial photography. The fact that these sensors are mounted on airborne and spaceborne platforms has the additional advantage of allowing studies, and the production of maps, of the Earth's surface on a global scale. Such imagery is also more stable than imagery from low-altitude aircraft, the orientation methods being highly accurate modeling algorithms. The digital format is an advantage for data storage and permits the automation of most of the procedures needed for map production. However, the procedures needed for preparing radar data are still more complex than for data acquired by passive sensors. Although the data considered in this study are expensive, once the whole map production system is installed and ready to operate, the costs can be competitive, as can the information given by the increasing amount and diversity of available data, which meets recent market demands.
The airborne thermal infrared hyperspectral imaging system, the Spatially Enhanced Broadband Array Spectrograph System (SEBASS), was flown over Mormon Mesa, NV, in May 1999 to provide the first test of such a system for geological mapping. Several types of carbonate deposits were identified using the 11.25-µm band. However, massive calcrete outcrops exhibited weak spectral contrast, which was confirmed by field and laboratory measurements. Because the weathered calcrete surface appeared relatively smooth in hand specimen, this weak spectral contrast was unexpected. Kirkland et al. (2002) observed that microscopic roughness not readily apparent to the eye had introduced both a cavity effect and volume scattering, reducing the spectral contrast; the macro-roughness of crevices and cobbles may also have a significant cavity effect. The Mormon Mesa site studied is approximately 6 miles west of Mesquite, NV (latitude 36.45°, longitude 114.15°). The study focused on the use of airborne hyperspectral imaging to detect and characterize unexpected effects that are not reproduced in standard laboratory measurements, and on the importance of exceeding minimum instrument requirements whenever possible. Weathering can weaken the field signature of a target, so when interpreting spectral data it is necessary to consider the weak signatures introduced by the possible presence of lower-spectral-contrast material. In this research, Kirkland et al. (2002) distinguished three levels of remote sensing: detection, discrimination, and identification. Detection involves a spectral signal that is measurably above the noise level; discrimination requires the signal to be detectable and distinguishable from that of adjacent material; and identification requires both discrimination and a diagnostic spectral band. For example, remotely sensed spectra may be converted to apparent emissivity and compared to laboratory spectra scaled in emissivity (Kahle and Alley, 1992). Emissivity is the measured radiance divided by the blackbody radiance at the target's kinetic temperature. When the true target temperature is not known, it must be estimated, and apparent emissivity is the measured radiance divided by the blackbody radiance calculated at the estimated target temperature (Conel, 1969). The results demonstrate the critical importance of exceeding the minimum defined instrument requirements whenever possible. If the objectives include identification of materials that may be weathered and/or rough, it should be remembered that the field signature of these materials is likely to be weak. This effect should be studied further using field and airborne hyperspectral instruments with sufficient sensitivity and spectral resolution to ensure detection and characterization of unexpected effects that are not reproduced in standard laboratory measurements. These steps are required to ensure instrumentation that meets the SNR and spectral resolution necessary to detect and identify the field materials of interest. When interpreting spectral data, it is essential to consider the uncertainties introduced by the possible presence of lower-spectral-contrast materials, and the possibility that targeted materials may be missed entirely.
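The apparent-emissivity computation described by Conel (1969) can be sketched directly from its definition: measured radiance divided by the Planck blackbody radiance at the estimated target temperature. The temperatures and emissivity value below are invented illustrations.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody (W m^-2 sr^-1 m^-1), Planck's law."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (np.exp(b) - 1.0)

def apparent_emissivity(measured_radiance, wavelength_m, est_temp_k):
    """Measured radiance divided by blackbody radiance at the estimated
    target temperature (Conel, 1969)."""
    return measured_radiance / planck_radiance(wavelength_m, est_temp_k)

wl = 11.25e-6      # the 11.25 µm region used for carbonates in the SEBASS study
true_temp = 300.0  # illustrative target kinetic temperature, K
true_emiss = 0.96  # illustrative true emissivity
measured = true_emiss * planck_radiance(wl, true_temp)

# With the temperature estimated correctly, the true emissivity is recovered:
print(round(apparent_emissivity(measured, wl, 300.0), 3))  # 0.96
# A 2 K error in the estimated temperature biases the result:
print(round(apparent_emissivity(measured, wl, 302.0), 3))
```

The second call shows why the temperature estimate matters: overestimating the target temperature depresses the apparent emissivity, mimicking the reduced spectral contrast discussed above.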
Hyperspectral remote sensing data, with bandwidths at the nanometer (nm) level, have tens or even hundreds of channels and contain abundant spectral information. Different channels therefore have their own properties and show the spectral characteristics of the various objects in the image. Rational feature selection from this variety of channels is very important for the effective analysis and information extraction of hyperspectral data. The site of the study was the Shunyi region of Beijing, for which the spectral characteristics of hyperspectral data were comprehensively analyzed. On the basis of analyzing the information quantity of bands, the correlation between different bands, the spectral absorption characteristics of objects, and object separability in bands, a fundamental method of optimum band selection and feature extraction from hyperspectral remote sensing data was proposed. Feature selection is one of the most important steps in the recognition and classification of remote sensing images; it is impossible to classify an image accurately and effectively without rational and efficient feature selection. This is especially true for hyperspectral remote sensing data (Mausel, 1990; Price, 1994; Hsu and Tseng, 1999). Abundant spectral information and power in distinguishing objects are the advantages of hyperspectral data. This does not mean, however, that the more bands used the better, for the following reasons. First, there is evident correlation between bands, which distorts the normal distribution of spectra and harms classification accuracy; if all the bands are used in classification without selection, the classification precision will decrease instead of increase. Second, the more bands are selected, the more training samples are needed to classify correctly, and it is very difficult for hyperspectral data to find sufficient correct training samples to meet the demands of the classifier if too many bands are chosen. Third, increasing the number of bands in classification inevitably increases processing time and cost, reducing processing speed and benefit. Since hyperspectral remote sensing data are rich in spectral information, evaluating and selecting optimal feature parameters for a concrete application goal is very important for making full use of that information and for recognizing objects effectively and accurately.
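The case against using every band can be illustrated with a simple correlation-based selection sketch. The greedy variance-plus-correlation criterion below is a generic stand-in for the band selection principles described (information quantity and inter-band correlation), not the exact method proposed for the Shunyi data, and the data cube is synthetic.

```python
import numpy as np

def select_bands(cube, n_select, corr_thresh=0.95):
    """Greedy band selection: rank bands by variance (a proxy for
    information quantity), then keep a band only if its correlation with
    every already-selected band stays below corr_thresh.

    cube: (pixels, bands) array of reflectances.
    """
    corr = np.abs(np.corrcoef(cube, rowvar=False))
    order = np.argsort(cube.var(axis=0))[::-1]  # most informative first
    selected = []
    for band in order:
        if all(corr[band, s] < corr_thresh for s in selected):
            selected.append(band)
        if len(selected) == n_select:
            break
    return selected

# Synthetic cube: bands 0-2 are nearly identical (highly correlated),
# band 3 carries independent information.
rng = np.random.default_rng(0)
base = rng.normal(size=500)
cube = np.column_stack([
    base,
    base * 1.01 + rng.normal(scale=0.01, size=500),
    base * 0.99,
    rng.normal(size=500),
])
print(select_bands(cube, 2))  # picks one of bands 0-2 plus band 3
```

Only one of the three redundant bands survives, showing how correlation screening shrinks the feature set without discarding independent information.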
Wang et al. (2007) evaluated a new neural network classifier using spectrally sampled image data to map mixed halophytic vegetation in tidal environments. The work is based on the concept of vegetation communities, mixtures of several species characteristic of salt marshes. The study site is the Venice lagoon, and the material available is a spectrally sampled Compact Airborne Spectral Imager (CASI) image, in conjunction with ground truth for precise characterization of vegetation communities. Detailed observations of vegetation species and of their fractional abundance were collected for 36 Regions of Interest (ROIs); these field polygons were used for classification training and accuracy assessment. To select the most significant spectral channels, the Spectral Reconstruction method was applied to the image data: a set of 6 bands was selected as optimal for classification, out of the 15 available. The spatial heterogeneity of salt-marsh vegetation is significant, and even at the spatial resolution of the airborne CASI image data, mixed pixels are observed. The Vegetation Community based Neural Network Classifier (VCNNC) was introduced to cope with a situation where no pure pixels exist, and was applied to the set of 6 selected bands. Classification results of VCNNC were compared, both quantitatively and qualitatively, with those of a conventional Neural Network Classifier (NNC) trained and assessed on exactly the same data sets. Land cover mapping using spectral data relies on the relationship established between radiometric data (attributes) and independent observations of target (land cover) attributes, and does not necessarily require the most radiometrically accurate at-surface data; there are, for example, indications that atmospheric corrections on airborne-acquired imagery may not lead to higher classification accuracy (Hoffbeck and Landgrebe, 1994). The developed method was then applied as an approach for mapping highly mixed vegetation in salt marshes. The approach includes methodologies to identify the spectral features containing the largest amount of information and the application of a neural network classifier to produce vegetation maps. The Spectral Reconstruction method, based on spectral information content, was chosen for optimal band selection. The application of this method to at-sensor CASI radiances extracted from training pixels showed that the information required for optimal vegetation mapping is contained in a subset of 6 spectral bands. These results are confirmed by experiments performed on at-sensor radiance simulated from detailed high-spectral-resolution field measurements, which yielded a very similar set of selected bands. The coherence between results from airborne remotely sensed and field observations indicates that little influence of atmospheric effects should be expected on the band selection procedure.
Neural network methods were preferred because they are capable of handling large amounts of data and do not require simplifying hypotheses on the statistical distribution of radiometric attributes. The Vegetation Community based Neural Network Classifier (VCNNC) was introduced. VCNNC training makes use of detailed knowledge of the intra-pixel fractional content of vegetation species (defining a particular vegetation community within a certain field polygon). Thus, VCNNC does not require "pure" training pixels, contrary to traditional classification methods such as Maximum Likelihood and the usual Neural Network Classifiers (NNC). This is important in salt-marsh areas, where vegetation distribution can be heterogeneous at the pixel scale, leading to highly mixed pixels.
The results in this case show that the overall classification accuracy of VCNNC is 91.6%, against only 84.17% for NNC. Furthermore, accuracy analysis applied to the two classification results shows a smaller classification error for VCNNC than for NNC, and that the accuracy difference is related to the degree of mixture: larger differences correspond to a higher mixing degree of the vegetation community. In other words, a training dataset based on the actual fractional abundance of species within a vegetation community leads to better classification results than a training dataset that arbitrarily defines pure pixels on the basis of a majority rule. Moreover, VCNNC provides the sub-pixel fractional abundance of vegetation species, information that is rather useful for studies of salt-marsh ecology. However, since VCNNC is a supervised classification for working with areas of mixed vegetation, detailed and careful fieldwork is necessary to acquire accurate training-sample information and obtain reliable classification results.
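The contrast between community-based (fractional-abundance) training and majority-rule training can be sketched with a minimal softmax classifier on synthetic mixed pixels. The spectra, mixing model, and single-layer classifier here are invented for illustration; VCNNC itself is a full neural network trained on CASI data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "communities": each pixel mixes 3 species spectra (4 bands)
# with known fractional abundances -- the situation VCNNC-style training
# is designed for.
S = rng.uniform(0.05, 0.6, size=(4, 3))   # species spectra (bands x species)
F = rng.dirichlet(np.ones(3), size=200)   # true fractions per pixel
X = F @ S.T + rng.normal(scale=0.01, size=(200, 4))  # noisy mixed pixels

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(X, targets, epochs=500, lr=0.5):
    """Softmax regression trained with cross-entropy; `targets` may be
    soft fractional abundances (community-based training) or one-hot
    majority-rule labels (conventional training)."""
    W = np.zeros((X.shape[1], targets.shape[1]))
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - targets) / len(X)
    return W

W_soft = train(X, F)                       # fractional-abundance targets
hard = np.eye(3)[F.argmax(axis=1)]         # majority-rule "pure" labels
W_hard = train(X, hard)

# The soft-trained model predicts sub-pixel fractions directly:
pred_soft = softmax(X @ W_soft)
pred_hard = softmax(X @ W_hard)
print("fraction error, soft targets:", round(np.abs(pred_soft - F).mean(), 3))
print("fraction error, hard targets:", round(np.abs(pred_hard - F).mean(), 3))
```

The soft-target model outputs sum to one and can be read as abundance estimates, which mirrors the sub-pixel fractional-abundance output attributed to VCNNC above.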

Comparative analysis of airborne hyperspectral sensors
The Imaging Spectrometer Data Analysis System (ISDAS) was developed by the Canada Centre for Remote Sensing and MacDonald Dettwiler and Associates on Sun Microsystems SPARC workstations, using the Application Visualization System (AVS) software package, to meet the requirements for efficient processing and analysis of hyperspectral data acquired with airborne as well as future spaceborne sensors. Various visualization tools have been developed for rapid exploratory analysis, together with preprocessing and information extraction tools for numerical analysis. Linkages to a spectral database and a conventional image analysis system were established to support the analysis. Staenz et al. (1997) state that ISDAS is being used for multidisciplinary applications development in areas such as agriculture, environment, and exploration geology, using physically based analysis approaches to retrieve information from hyperspectral data. This continuing effort, in collaboration with industry, will lead to streamlined procedures that are important for taking hyperspectral satellite remote sensing towards an operational mode.
An overview of this work is that the integrated system is designed to process airborne data, as well as data acquired by future spaceborne imaging spectrometers, incorporating the simulation of future sensor data in the spectral domain from existing hyperspectral sensor data. With these objectives in mind, various visualization tools have been developed, together with data input/output, preprocessing, and information extraction tools, and linkages to a spectral database and a conventional image analysis software package. These tools provide the functionality to go from calibrated data to surface reflectance, to interactively view and analyze data, to extract qualitative and quantitative information, and to output results. They have been applied to CASI, FLI, SF%, and AVIRIS data in areas such as forestry, agriculture, and environmental monitoring and assessment. The tools are built into AVS, a commercial graphics software product running on Sun Microsystems SPARC workstations. This software environment is based on a modular design that guarantees the necessary flexibility for further modifications and additions with respect to future sensor data and related data processing technologies. Borner et al.
(2001) emphasized that consistent end-to-end simulation of airborne and spaceborne Earth remote sensing systems is an important task, and sometimes the only way to adapt and optimise a sensor and its observation conditions, to choose and test algorithms for data processing, to estimate errors, and to evaluate the capabilities of the whole sensor system. The software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part considers the radiometry; it calculates the at-sensor radiance using a pre-calculated multidimensional lookup table that takes the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end simulation approach is explained, all relevant concepts of SENSOR are discussed, first examples of its use are given, and the verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.
The sensor module describes the hardware of the remote sensing system from the viewpoint of signal and system theory (Jahn and Reulke, 1995). It is divided into an optical and an electronic part. The aim is the calculation of digital numbers from the at-sensor radiance, given either by the radiative transfer module of SENSOR or by radiance values provided by other hyperspectral remote sensing systems. The complex end-to-end simulation tool SENSOR allows the modeling of a large variety of optoelectronic remote sensing systems owing to its modular and open structure. SENSOR includes models for the sensor hardware itself, the observed scene, and the atmosphere. Advanced features are implemented, e.g., ray tracing, fast and flexible access to atmospheric LUTs, the sky-view factor, the point-spread function, and noise sources. With this tool, the interactions between the parameters of the object-environment-sensor system, the data processing, and output quantities such as data accuracy and costs can be evaluated. Mustapha and Hutton (2001), in their study of position and orientation measurement systems used to directly georeference airborne imagery data, present the accuracies attainable for the final mapping products. The Applanix Position and Orientation System for Airborne Vehicles (POS/AVTM) has been used successfully since 1994 to georeference airborne data collected from multispectral and hyperspectral scanners, LIDARs, and film and digital cameras. POS/AVTM uses integrated inertial/GPS technology to directly compute the position and orientation of the airborne sensor with respect to the local mapping frame. A description of the POS/AVTM system is given, along with an overview of the integrated inertial/GPS processing. An error analysis for the airborne direct geo-referencing technique is then presented. First, theoretical analysis is used to determine the attainable positioning accuracy of ground objects using only camera position, attitude, and image data, without ground control. Besides the theoretical error analysis, a practical error analysis was carried out to present actual results using only the POS data plus digital imagery, without ground control except for QA/QC. The outcome of this investigation is that the use of POS/AV enables a variety of mapping products to be generated from airborne navigation and imagery data without the use of ground control. The Applanix POS/AVTM direct geo-referencing system (Figure 1) comprises four main components: an IMU, a dual-frequency low-noise GPS receiver, a computer system (PCS), and a post-processing software suite called POSPacTM. The heart of the system, however, is the integrated inertial navigation software, implemented both in real time on the PCS and in post-mission processing with POSPacTM. In POSPacTM, the GPS measurements are used to aid the inertial navigation solution produced by integrating the IMU outputs, yielding a blended position and orientation solution that retains the dynamic accuracy of the inertial navigation solution but has the absolute accuracy of GPS.
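The core of direct geo-referencing, mapping an image ray through the sensor's measured position and attitude to a ground coordinate, can be sketched as follows. This is a flat-terrain intersection with made-up coordinates; the real POS/AVTM processing also handles lever arms, boresight calibration, and terrain models.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-mapping-frame rotation from roll, pitch, yaw (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def georeference(sensor_pos, roll, pitch, yaw, image_ray, ground_z=0.0):
    """Rotate an image ray into the mapping frame and intersect it with a
    flat terrain plane at height ground_z."""
    ray = rotation_matrix(roll, pitch, yaw) @ image_ray
    s = (ground_z - sensor_pos[2]) / ray[2]   # scale factor to reach the ground
    return sensor_pos + s * ray

# Nadir-looking example: aircraft at 1000 m height, level flight.
pos = np.array([500_000.0, 4_000_000.0, 1000.0])   # easting, northing, height
ray = np.array([0.0, 0.0, -1.0])                   # nadir pixel direction
print(georeference(pos, 0.0, 0.0, 0.0, ray))       # lands directly below the aircraft
```

With zero attitude angles the ground point falls directly beneath the sensor; non-zero roll or pitch tilts the ray, which is why attitude errors translate directly into horizontal positioning errors in the analysis described above.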

<<Figure 1. Applanix POS/AVTM direct georeferencing system>>
A concise description of the concept of airborne remote sensing without ground control has thus been introduced through Applanix's POS systems. The basic concepts of inertial/GPS integration have been described, along with the accuracy that can be achieved using such techniques, and the ground accuracy using POS integrated with a digital frame camera was analyzed. The results show that direct geo-referencing can be used to obtain digital orthophotos to an accuracy that meets many remote sensing requirements. Meanwhile, MultiSpec is a multispectral image data analysis software application designed by Biehl and Landgrebe (2002). It is intended to provide a fast, easy-to-use means of analyzing multispectral image data, such as that from the Landsat, SPOT, MODIS, or IKONOS series of Earth observational satellites; hyperspectral data such as that from the Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) and the EO-1 Hyperion satellite system; or the data that will be produced by the next generation of Earth observational sensors. The primary purpose of the system was to make new, otherwise complex analysis tools available to the general Earth science community. It has also found use in displaying and analyzing many other types of non-space-related digital imagery, such as medical image data, and in K-12 and university-level educational activities. MultiSpec has been implemented for both the Apple Macintosh and Microsoft Windows operating systems (OS); the effort was first begun on the Macintosh OS in 1988. MultiSpec had its origin in the LARSYS multispectral image analysis system (Phillips, 1973), one of the first remote sensing multispectral data processing systems, originally created during the 1960s. Biehl and Landgrebe (2002) note that MultiSpec recognizes several header formats including ArcView, ENVI, ERDAS 7.3 and 7.4 *.lan and *.gis, ERDAS Imagine (4-, 8- and 16-bit uncompressed), FastL7A, GAIA, GeoSPOT, GeoTIFF and TIFF uncompressed, HDF Scientific Data Model (if
all in one file), Land Analysis System (LAS), LARSYS Multispectral Image Storage Tape (MIST), LGSOWG, MacSADIE, MultiSpec ASCII classification, PDS, Sun "Screen Dump", TARGA uncompressed, and VICAR formats. MultiSpec also recognizes ArcView shape files. These capabilities provide a state-of-the-art ability to analyze moderate- and high-dimensional multispectral data sets of practical size.
The increasing number of sensor types for terrestrial remote sensing has necessitated supplementary efforts to evaluate and standardize data from the different available sensors. Soudani et al. (2006) assessed the potential use of the IKONOS, ETM+, and SPOT HRVIR sensors for leaf area index (LAI) estimation in forest stands. In situ measurements of LAI in 28 coniferous and deciduous stands were compared to reflectance in the visible, near-infrared, and shortwave bands, and also to five spectral vegetation indices (SVIs): the Normalised Difference Vegetation Index (NDVI), Simple Ratio (SR), Soil Adjusted Vegetation Index (SAVI), Enhanced Vegetation Index (EVI), and Atmospherically Resistant Vegetation Index (ARVI). The three sensor types show the same predictive ability for stand LAI, with an uncertainty of about 1.0 m2/m2 for LAI between 0.5 and 6.9 m2/m2. For each sensor type, the strength of the empirical relationship between LAI and NDVI remains the same regardless of the image processing level considered (digital counts, radiances using calibration coefficients for each sensor, top-of-atmosphere (TOA) reflectance, and top-of-canopy (TOC) reflectance). On the other hand, NDVIs based on radiance, TOA reflectance, and TOC reflectance determined from IKONOS radiometric data are systematically lower than those from SPOT and ETM+ data. The offset is approximately 0.11 NDVI units for radiance- and TOA-reflectance-based NDVI, and approximately 0.20 NDVI units after atmospheric corrections. The same conclusions were observed using the other indices: SVIs computed from IKONOS data are always lower than those computed from ETM+ and SPOT data. Factors that may explain this behavior were investigated. LAI is a key parameter of ecosystem processes and has received considerable attention (Asner et al., 2003): different aspects of the ecophysiological development of a forest ecosystem are strongly controlled by LAI (Machado and Reich, 1999; Vargas et al., 2002), as are the interception of light (van Dijk and Bruijnzeel, 2001) and gross productivity (Coyea and Margolis, 1994; Jarvis and Leverenz, 1983).
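The five SVIs compared in the study have standard published formulas, which can be sketched as follows. The reflectance values are invented, and the SAVI/EVI/ARVI coefficients are the commonly used defaults, which the study may have configured differently.

```python
def vegetation_indices(red, nir, blue=None, L=0.5):
    """Spectral vegetation indices from band reflectances (0-1).

    NDVI, SR, and SAVI need only red and NIR; EVI and ARVI also need the
    blue band. Coefficients follow the common published forms.
    """
    indices = {
        "NDVI": (nir - red) / (nir + red),
        "SR": nir / red,
        "SAVI": (1 + L) * (nir - red) / (nir + red + L),
    }
    if blue is not None:
        rb = 2 * red - blue  # atmosphere-resistant red (ARVI, gamma = 1)
        indices["ARVI"] = (nir - rb) / (nir + rb)
        indices["EVI"] = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
    return indices

# Illustrative dense canopy: low red, high NIR reflectance.
for name, value in vegetation_indices(0.04, 0.45, blue=0.03).items():
    print(f"{name}: {value:.2f}")
```

A systematic offset in one sensor's red-band response propagates directly into every one of these ratios, which is why the IKONOS red-band difference discussed above depresses all five indices rather than NDVI alone.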
Based on simulations using the SAIL bidirectional canopy reflectance model coupled with the PROSPECT leaf optical properties model (i.e., PROSAIL), Soudani et al. (2006) demonstrated that the spectral response in radiance of the IKONOS sensor in the red band is the main factor explaining the differences in SVIs between IKONOS and the other two sensors. IKONOS, ETM+, and SPOT HRVIR are among the most frequently used sensors for terrestrial applications. Given the subtle responses of canopies to environmental changes, and the small variations of canopy reflectance under investigation, the intercomparison of these three sensors is an important task that may open new perspectives on spatial and temporal analyses of changes in forest canopies. Based on in situ measurements of LAI in 28 forest stands, the relationships established between LAI and the SVIs show that the three sensors have the same ability for LAI prediction; on average, the RMSE values from the different SVIs are very close (≈1.0 m2/m2). On the other hand, SVIs determined using IKONOS radiometric data are systematically lower than those using SPOT and ETM+. The offset is about -0.11 for radiance- and TOA-reflectance-based NDVI, and about -0.21 after atmospheric corrections. Factors with the potential to explain these differences were evaluated based on the PROSAIL simulations. The analysis showed that: (a) using the radiance spectral responses of each of the three sensors as inputs to the PROSAIL model, IKONOS red reflectance is 53% higher than that of SPOT and ETM+, while the IKONOS near-infrared band is 5% lower; the differences in the red band cause an average negative offset in IKONOS NDVI of about 0.08 for LAI ranging from 0.7 to 6.9 m2/m2, and the spectral behavior of ETM+ and SPOT may be considered identical; (b) the gap between IKONOS and both SPOT and ETM+ in red reflectance and NDVI is LAI-dependent: it increases as LAI increases until the signal saturation threshold is reached (LAI ≈ 4 m2/m2).
Based on PROSAIL simulations, truncating the radiance spectral response of the IKONOS red band to match that of ETM+ largely removes the discrepancies between the two sensors in the red band and in the NDVI, so that their outputs may be considered similar. It follows from these findings that the edge distortion of the IKONOS spectral response in the red band is the main factor explaining the differences between this sensor and both SPOT and ETM+. Finally, Soudani et al. (2006) concluded that for bare soils or surfaces covered by very sparse vegetation, radiometric data acquired by IKONOS, SPOT, and ETM+ are similar and may be used without any correction. For surfaces with dense vegetation, a negative offset of 10% of IKONOS NDVIs should be taken into account.
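The NDVI arithmetic and the offset correction recommended above can be sketched as follows; the function names, array layout, and the use of a constant additive correction are illustrative assumptions (the reported offset is in reality LAI-dependent):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

def harmonize_ikonos_ndvi(ndvi_ikonos, dense_vegetation_mask, offset=0.10):
    """Add the reported negative IKONOS NDVI offset back over dense canopies
    so values become comparable with SPOT/ETM+ (assumption: a constant
    additive correction is adequate for this sketch)."""
    corrected = np.array(ndvi_ikonos, dtype=float)
    corrected[dense_vegetation_mask] += offset  # only dense-vegetation pixels
    return corrected
```

For bare or sparsely vegetated surfaces the mask would simply be all-False, leaving the IKONOS values untouched, which matches the conclusion above that no correction is needed there.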
As part of a long-term effort to introduce precision viticulture in the Demarcated Region of Douro (DRD), Morais et al. (2007) present the architecture, hardware, and software of a platform designed for that purpose, called MPWiNodeZ. A major feature of this platform is its power-management subsystem, able to recharge batteries with energy harvested from the surrounding environment from up to three sources. It allows the system to sustain operation as a general-purpose wireless acquisition device for remote sensing over large coverage areas, where the power to run the devices is always a concern. As a ZigBee network element, the MPWiNodeZ provides a mesh-type array of acquisition devices ready for deployment in vineyards. In addition to describing the overall architecture, hardware, and software of the monitoring system, the study reports on the performance of the module in the field, emphasizing the energy issues crucial to self-sustained operation. Testing was done in two stages: the first in the laboratory, to validate the power-management and networking solutions under particularly severe conditions, and the second in a vineyard. Measurements of the system's behavior confirm that the proposed hardware and software solutions do indeed lead to good performance. The platform is currently being used as a simple and compact yet powerful building block for generic remote sensing applications, with characteristics well suited to precision viticulture in the DRD region. It is planned to be used as a network of wireless sensors on the canopy of vines, to assist in the development of grapevine powdery-mildew prediction models. The work thus demonstrates the feasibility of a ZigBee-based remote sensing network intended for precision viticulture in the Demarcated Region of Douro, with network nodes powered by batteries recharged from energy harvested from the environment. The power-management aspects proved particularly critical, the main issues being the on-off cycles caused by partially charged batteries, and connectivity/network failures that lead to repeatedly unsuccessful connection attempts. Morais et al. (2007) designed the nodes to deal correctly with these issues and verified the solutions adopted by testing the nodes under particularly severe conditions. The testing and deployment of the devices was a two-stage process: in the first stage, the devices were tested in the laboratory to validate the solutions that had been implemented, with particular emphasis on the power-management aspects.
The power-consumption profiles measured during the tests validated the software solution, which is based on a finite state machine. The second and final stage was the deployment of a network of devices in the field, in a vineyard, with the cooperation of a winegrower. All results obtained so far confirm that the system works as envisaged and operates reliably. Morais et al. (2007) concluded that the system nodes are able to sustain themselves on solar energy alone; in other words, a ZigBee-based sensor network powered by batteries recharged by solar energy alone is feasible, provided the networking and power-management issues are handled as proposed. No new hardware or software issues appeared when operating the system in the field. The system is in principle also able to harvest kinetic energy from wind and from water in pipes. However, these harvesting techniques were not tested, for two reasons: first, these energy sources are more relevant to routers, which need a permanent energy supply, than to network nodes, which are less critical and can shut themselves off if necessary; second, the performance of the nodes was the main concern and the main purpose of the study. The system was endowed with the possibility of harvesting both solar and kinetic energy in anticipation of future applications, including, for example, applications in greenhouses: a router placed inside a greenhouse would clearly benefit from harvesting kinetic energy from water in pipes.
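As a rough illustration of the kind of finite-state power management described above, the sketch below shows one plausible shape for such a state machine; the states, voltage thresholds, and names are invented for illustration and are not the MPWiNodeZ implementation:

```python
# Hypothetical power-management finite state machine for a battery-powered
# sensor node. All states and thresholds are illustrative assumptions.
SLEEP, MEASURE, TRANSMIT = "SLEEP", "MEASURE", "TRANSMIT"

WAKE_THRESHOLD = 3.6    # volts required before leaving SLEEP (assumed)
CUTOFF_THRESHOLD = 3.3  # volts below which the node forces itself asleep (assumed)

def next_state(state, battery_volts, link_up):
    """Return the next FSM state given battery level and network link status."""
    if battery_volts < CUTOFF_THRESHOLD:
        return SLEEP                           # avoid on-off cycling on a weak battery
    if state == SLEEP:
        return MEASURE if battery_volts >= WAKE_THRESHOLD else SLEEP
    if state == MEASURE:
        return TRANSMIT if link_up else SLEEP  # back off on network failure
    if state == TRANSMIT:
        return SLEEP                           # duty-cycle back to sleep
    return SLEEP
```

The point of the hysteresis between the two thresholds is exactly the issue reported above: a partially charged battery must not toggle the node on and off repeatedly.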

Computerization in image processing and quantification of individual oil palm trees
Computer science is a vibrant and constantly evolving discipline whose research spans a very wide spectrum of activities, from information retrieval to animation and image processing, and whose methods now support fields such as physics, chemistry, mathematics, agriculture, and the environment. Computer vision is the construction of explicit, meaningful descriptions of physical objects, for example soil, wetland, or trees, from images. It is the enterprise of automating and integrating the wide range of processes and representations used for visual perception, including image processing and statistical pattern classification. Image analysis and computer vision constitute a broad and rapidly evolving field. A digital image usually stands for a 2-D intensity function, represented by a matrix whose rows and columns identify a point in the image and whose element values are the pixels. The results of image analysis are exceptionally valuable in environmental applications, particularly for counting oil palm trees. Quantifying oil palm trees manually is a very time-consuming and complex task, because many physical and environmental factors are involved; since oil palm is a significant crop, this issue needs to be resolved. Many researchers have worked on this problem, and recently high-resolution IKONOS imagery has become available, with the accuracy needed to view a plantation from many angles. Dr Liew Soo Chin at the Centre for Remote Imaging, Sensing and Processing (CRISP) developed software that provides a practical procedure for monitoring, quantifying, and modeling trees, as shown in Figures 2 and 3.
Image processing and analysis proceed through a sequence of steps. First, the raw data are georeferenced and geometrically corrected. An image histogram is then used to examine the three bands: the near-infrared band displayed in red, the red band displayed in green, and the green band displayed in blue. Subsequently, image enhancement is applied for visual interpretation, followed by image classification according to the brightness and color information of each pixel, which distinguishes objects within the image such as water, forest, and shrubs. In high spatial resolution imagery, details such as buildings and roads can be seen; the amount of detail depends on the image resolution. In very high resolution imagery, even road markings, vehicles, individual tree crowns, and groups of people can be seen clearly. Pixel-based methods of image analysis will not work successfully on such imagery. In order to fully exploit the spatial information contained in the imagery, image processing and analysis algorithms that utilize textural, contextual, and geometrical properties are required. Such algorithms make use of the relationships between neighboring pixels for information extraction, and incorporation of a priori information is sometimes required. A multi-resolution approach (i.e., analysis at different spatial scales, combining the results) is also a useful strategy when dealing with very high resolution imagery; in this case, pixel-based methods can be used at the lower resolutions and merged with contextual and textural methods at the higher resolutions. Individual trees in very high resolution imagery can be detected based on the tree crown's intensity profile. An automated technique for detecting and counting oil palm trees in IKONOS images, based on the differential-geometry concepts of edge and curvature, has been developed at CRISP, National University of Singapore.
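As a simple stand-in for the crown-intensity-profile idea (not the CRISP edge-and-curvature method itself), the following sketch flags candidate tree crowns as bright local maxima of the image; the window size and brightness floor are assumed parameters:

```python
import numpy as np

def detect_crowns(intensity, window=3, min_brightness=0.5):
    """Locate candidate tree crowns as local maxima of image intensity.
    A pixel is a crown candidate if it is the brightest in its window and
    exceeds a brightness floor; both parameters are illustrative assumptions,
    simplifying the edge/curvature approach mentioned in the text."""
    img = np.asarray(intensity, dtype=float)
    rows, cols = img.shape
    half = window // 2
    crowns = []
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            if img[r, c] >= min_brightness and img[r, c] == patch.max():
                crowns.append((r, c))
    return crowns
```

Counting the returned coordinates gives a tree tally; a real implementation would also suppress plateau ties and adapt the window to the expected crown diameter at the image's ground resolution.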

Hyperspectral pixel detection
A hyperspectral image pixel, at practical spatial resolutions, usually contains more than one material, especially in agricultural mapping. A plant image is generally composed of leaves, flowers, fruit, stem, and ground, and all of these reflect light differently. For quantification, the first step is therefore to design an algorithm that discovers these materials and their contribution to each pixel and subpixel in order to detect the target. An a priori model leading to an a posteriori approach is applied here; the main difference between the two approaches is that the abundance fractions generated by the a priori model do not reflect the true fractional amounts in the image pixel, whereas the a posteriori model can estimate true abundance fractions, which can then be used for target classification and material quantification (Chang, 2003). Linear Spectral Mixture Analysis (LSMA) is considered for target detection and is evaluated with Nonnegative Constrained Matrix Factorization (NCMF), which helps produce positive identifications. The Pure Pixel Converter (PPC) and the Mixed to Pure Converter (MPCV) are then evaluated, and finally a minimum-distance formula is used, for which the most favorable norm is the Euclidean norm.
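The minimum-distance step with the Euclidean norm can be illustrated with a short sketch; the function name and the (p, L) layout of the signature array are assumptions for illustration:

```python
import numpy as np

def classify_min_distance(r, signatures):
    """Assign pixel vector r to the target signature with the smallest
    Euclidean norm ||r - m_j||, returning the winning index j.
    `signatures` is assumed to be a (p, L) array of endmember spectra."""
    r = np.asarray(r, dtype=float)
    m = np.asarray(signatures, dtype=float)
    distances = np.linalg.norm(m - r, axis=1)  # Euclidean distance per signature
    return int(np.argmin(distances))
```

For example, with two signatures [1, 0] and [0, 1], a pixel [0.9, 0.1] is assigned to the first signature and [0.2, 0.8] to the second.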

Linear spectral mixture analysis (LSMA)
LSMA frequently arises in mixed pixel classification (MPC). In the linear mixing model, L is the number of spectral bands and t_1, t_2, ..., t_p are the targets, with target signatures (endmembers) m_1, m_2, ..., m_p, also expressed as digital numbers. The hyperspectral pixel vector r is a linear combination of the target signatures m_1, m_2, ..., m_p with abundance fractions α_1, α_2, ..., α_p. Suppose that r is an L×1 column pixel vector and M is an L×p target spectral matrix.
The model for the spectral signature can then be written as:

r = Mα + n    (1)

where n is the noise, which can also be construed as a measurement error, and r is the observed spectral signature. This is also known as the Bayes or a priori model.
In equation (1), the abundance fractions must be nonnegative and sum to one: α_j ≥ 0 for all j, and α_1 + α_2 + ... + α_p = 1. When the materials in a pixel are arranged close to one another, as in a mixture of diverse natural materials (stone, granite, rocks), light interacts with more than one material, so the reflected value is closer to a nonlinear combination of the individual reflections; because each material reflects differently, the linear mixing model does not work correctly in such cases. Several techniques have been developed to estimate the abundance fractions directly from the image (Settle, 1996; Tu et al., 1997; Chang et al., 1998) based on a posteriori information. The operator χ_MPCV, called the mixed-to-pure converter (MPCV), operates on a pixel vector r and assigns r to a signature m_j for some j; the estimated noise n is absorbed into the term u_j to account for misclassification error. Interpreting equations (1) and (5), each target signature vector in M represents a distinct class, and an image pixel vector r is assigned to one signature of M through the MPCV. A binary image is then formed that displays target pixels only. Ren (2000) suggested a method based on a winner-take-all (WTA) thresholding criterion: given p target signatures {m_1, ..., m_p}, where m_j is the jth signature, let r be the mixed pixel vector to be classified and α(r) = (α_1(r), α_2(r), ..., α_p(r)) the estimated p-dimensional abundance vector; the pixel is assigned according to the largest estimated abundance fraction. The algorithm developed here performs a comparative analysis of the image. It was implemented in a high-level language, and the resulting source code was run on an Intel Core 2 Duo system with a 2.00 GHz processor (4 CPUs) and 1 GB of RAM.
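A minimal sketch of the unmixing and winner-take-all steps above, assuming an unconstrained least-squares estimate with clipping and renormalization standing in for fully constrained abundance estimation (all names are illustrative):

```python
import numpy as np

def unmix_lsma(r, M):
    """Least-squares estimate of abundances alpha in r = M alpha + n.
    M is an (L, p) matrix of target signatures. The solution is clipped
    to be nonnegative and renormalized to sum to one -- a simple stand-in
    for fully constrained abundance estimation."""
    alpha, *_ = np.linalg.lstsq(np.asarray(M, dtype=float),
                                np.asarray(r, dtype=float), rcond=None)
    alpha = np.clip(alpha, 0.0, None)
    total = alpha.sum()
    return alpha / total if total > 0 else alpha

def mixed_to_pure_wta(alpha):
    """Winner-take-all mixed-to-pure conversion: assign the pixel to the
    signature with the largest estimated abundance fraction."""
    return int(np.argmax(alpha))
```

Applying `mixed_to_pure_wta` to every pixel's abundance vector yields the binary target image described above, with each pixel collapsed onto a single endmember class.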

Conclusion
Several hyperspectral and image analysis algorithms have been developed for target detection in different fields. They are very difficult to compare with one another because of the lack of standardized data: no particular rule or process is strictly followed to provide the information or evidence needed to validate an algorithm. The algorithm designed here carries out mixed-pixel to pure-pixel conversion by imposing constraints on the target-signature abundance fractions. As a result, the algorithm reduces to three components: LSMA, MPC, and distance-based pure-pixel classification. A WTA-based converter with thresholding is used to count the target pixels.

<<Figure 2. Oil palm trees in an IKONOS image>>

<<Figure 3. Detected trees (white dots) superimposed on the image>>

Mix to pure converter (MPCV)
The estimated abundance vector α(r) is the a posteriori abundance estimation model. Comparing the pure pixel converter (PPC) with the mixed pixel converter (MPC), the estimated abundance vector α must be a pure abundance vector, and there are only p choices for α. Solving the MPC problem is therefore reduced to a p-class classification.
Figures 4 and 5 below illustrate the details of the algorithm and the framework involved in the advanced processing of hyperspectral data.

<<Figure 4. Mix to pure conversion algorithm>>