Abstract

The downside risk of crop production affects the entire supply chain of the agricultural industry nationally and globally. It also has a profound impact on food security, and thus livelihoods, in many parts of the world. The advent of remote sensing platforms with high temporal, spatial and spectral resolution, particularly over the last 5 years, together with advances in software pipelines and cloud computing, has enabled the collation, analysis and application of 'big data' systems, especially in agriculture. Furthermore, the application of traditional and novel computational and machine learning approaches is helping to resolve complex interactions, revealing components of ecophysiological systems that were previously deemed either 'too difficult' to solve or 'unseen'. In this review, digital technologies encompass mathematical, computational, proximal and remote sensing technologies. We review the current state of digital technologies and their application in broad-acre cropping systems globally and in Australia. More specifically, we discuss advances in (i) remote sensing platforms, (ii) machine learning approaches to discriminate between crops and (iii) the prediction of crop phenological stages from both sensing and crop simulation systems for major Australian winter crops. An integrated solution is proposed to allow accurate development, validation and scalability of predictive tools for crop phenology mapping at within-field scales, across extensive cropping areas.

1. INTRODUCTION: SENSING OF CROP TYPE AND CROP PHENOLOGY FROM SPACE

With the burgeoning challenges that Earth is currently experiencing, mainly driven by the progressive increase in climate extremes, rapid population growth, reduction in arable land, and the depletion of, and competition for, natural resources, it is evident that the required increase in food production (>60 % above current levels) by 2050 (Alexandratos and Bruinsma 2012) is one of the greatest tests facing humanity. It is also apparent that gains in future production are more likely to come from closing the yield gap (increases in yield) than from an increase in harvested area. For example, from 1985 to 2005 production increased by 28 %, the major portion of which (20 %) came from an increase in yield (Foley et al. 2011). The challenge to produce more food more effectively is so significant that, in 2016, 'End hunger, achieve food security and improved nutrition and promote sustainable agriculture' was adopted as the second Sustainable Development Goal ('End Poverty' being the first) by the United Nations (https://www.un.org/sustainabledevelopment/).

Therefore, significant advances in global food production are needed to meet the projected food demand driven by rapid population growth and the growing affluence of emerging economies (Ray et al. 2012). Furthermore, climate variability and change, especially the frequency of extreme events, in conjunction with regional water shortages and limits to available cropping land, are factors that influence crop production and food security (Fischer et al. 2014; IPCC 2019; Ababaei and Chenu 2020). Accurate information on the spatial distribution and growth dynamics of cropping is therefore essential for assessing potential risks to food security, and critical for evaluating market trends at regional, national and even global levels (Alexandratos and Bruinsma 2012; Orynbaikyzy et al. 2019). This is especially important for crops grown in arid and semi-arid areas, such as those of Australia, where the cropping system is highly volatile owing to climate variability and frequent extreme weather events, including extended droughts and floods (e.g. Chenu et al. 2013; Watson et al. 2017). Indeed, Australia has one of the most variable climates worldwide, with consequent impacts on the productivity and sustainability of natural ecosystems and agriculture (Allan 2000). In addition, cereal yields have plateaued over the last three decades in Australia (Potgieter et al. 2016), while volatility in total farm crop production is nearly double that of any other agricultural industry (Hatt et al. 2012). In this regard, digital technologies, including proximal and remote sensing (RS) systems, have a critical role to play in enhancing the food production, sustainability and profitability of production systems. The application of digital technologies within agriculture has grown rapidly over the last 5 years, with production growth of at least 70 % predicted at the farm level, to a total value of US$1.9 billion (Igor 2018).

1.1 Linking variability in crop development to yield

Crop growth and development are mainly a function of the interactions between genotype, environment and management (G × E × M). This can result in large variability in crop yield, phenology and crop type at subfield, field, farm and regional scales. In most cases the environment, including climate and soils, cannot be changed, and matching crop development to sowing date and seasonal conditions is critical in order to maximize grain yield, especially in more marginal environments (Fischer et al. 2014; Flohr et al. 2017). Producers can increase yields by choosing an elite, high-yielding variety adapted to their environment and/or changing cropping practices to align with expected climatic conditions and yield potentials (Fischer 2009; Zheng et al. 2018). Sowing dates and cultivars are targeted in each region so that flowering coincides with the period that minimizes abiotic stress, such as frost, water limitation and/or heat, known as the optimal flowering period (Flohr et al. 2017). Despite agronomic manipulation through sowing date and cultivar choice, there is still often significant spatial and temporal variation in crop development between and within cropping regions and at the field scale.

The variability in crop development and its link with grain yield across regions and from season to season is best captured by the latest long-term simulation studies that define optimal flowering periods. For example, final yield correlates strongly with the optimal flowering time (OFT) for sorghum (Wang et al. 2020) across different environments of north-eastern Australia (Fig. 1). Note that in low-rainfall years (e.g. 2017), this correlation can become negative as later-flowering crops exhaust the stored water supply. Significant relationships were also evident for wheat (Fig. 2) (Flohr et al. 2017), canola (Lilley et al. 2019) and barley (Liu et al. 2020) across south-eastern Australia when the OFT is determined across multiple years. For example, in the cooler, high-rainfall zone at Inverleigh (Fig. 2) the OFT is narrower and later than at Minnipa.

Figure 1. Sorghum yield for 28 commercial cultivars versus days to flowering across three seasons at different locations across north-eastern Australia. Adapted from Wang et al. (2020).

Figure 2. The optimal flowering period (OFP) for a mid-fast cultivar of wheat determined by APSIM simulation for (A) Waikerie, South Australia; (B) Minnipa, South Australia; (C) Yarrawonga, Victoria; (D) Inverleigh, Victoria; (E) Urana, New South Wales; (F) Dubbo, New South Wales. Black lines represent the frost and heat limited (FHL) 15-day running mean yield (kg ha−1). Grey lines represent the standard deviation of the FHL mean yield (kg ha−1). Grey columns are the estimated OFP, defined as ≥95 % of the maximum mean yield from 51 seasons (1963–2013). Data source: Flohr et al. (2017). Image: Copyright © 2021 Elsevier B.V.
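The OFP definition used in Fig. 2 (a 15-day running mean of frost- and heat-limited yield, with the OFP taken as the flowering dates achieving ≥95 % of the maximum mean yield) is straightforward to compute. The sketch below illustrates that calculation on entirely synthetic yield data; the function name and toy inputs are ours, not from Flohr et al. (2017).

```python
import numpy as np

def optimal_flowering_period(doy, mean_yield, window=15, threshold=0.95):
    """Estimate the optimal flowering period (OFP) from simulated yields.

    doy        : 1-D array of flowering dates (day of year), daily spaced.
    mean_yield : multi-season mean yield (kg/ha) for each flowering date.
    Returns (start_doy, end_doy) of the period where the `window`-day
    running mean yield is >= `threshold` of its maximum.
    """
    doy = np.asarray(doy, dtype=float)
    y = np.asarray(mean_yield, dtype=float)
    # Centred running mean (same length as input).
    kernel = np.ones(window) / window
    smooth = np.convolve(y, kernel, mode="same")
    ok = smooth >= threshold * smooth.max()
    return doy[ok].min(), doy[ok].max()

# Toy example: mean yield peaks for crops flowering around day 270.
doys = np.arange(200, 321)
yields = 4000 * np.exp(-((doys - 270) / 25.0) ** 2)
start, end = optimal_flowering_period(doys, yields)
```

With these synthetic data the returned window brackets the yield peak near day 270; real applications would use frost- and heat-limited yields simulated over many seasons.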

Whilst optimization of flowering time has allowed the combined stresses of drought, frost and heat to be minimized, these abiotic stresses still take a large toll on crops every year (Zheng et al. 2012a; Hunt et al. 2019). Growers often need to adjust crop inputs to reflect yield potential and implement alternative strategies, such as cutting frosted crops for fodder. Furthermore, yield losses from these events are often spatially distributed and dependent on the phenological stage. Not only do frost events vary spatially with topography across the landscape from season to season (Zheng et al. 2012a; Crimp et al. 2016), but more hot days and fewer cold days are also expected in future climates across much of the cereal belt (Collins et al. 2000; Collins and Chenu 2021). Nevertheless, frost is still expected to remain a constraint on crop production in future climates (Zheng et al. 2012a). The additional interaction of temperature, water stress and variable soil moisture can lead to large spatial variability in crop yields (e.g. wheat, barley, canola, chickpea and field pea), with regional differences. The phenological stage at which stress occurs can further exacerbate the spatial and temporal differences in yields (Flohr et al. 2017; Dreccer et al. 2018a; Whish et al. 2020).

Accurate assessment of plant growth and development is therefore essential for agronomic management, particularly for decisions that are time-critical and growth stage-dependent, in order to maximize the efficiency of crop inputs and increase crop yields. Detecting crop phenological stages at subfield and across-field scales provides critical information that a producer can use to adjust tactical decisions, such as in-field inputs of nitrogen, herbicides and/or fungicides, as well as other management practices (e.g. salvaging crops, pre-harvest desiccation). These practices depend on reliable determination of crop development and are not only maturity-specific but also crop species-specific. Thus, accurate prediction of crop phenological stage and crop type is needed to help producers make more informed decisions on variable rate application within and between fields on a farm. This will lead to large cost savings and increased on-farm profitability. Other applications include assisting crop forecasters and managers in monitoring large cropping areas, where crop phenology could be matched with climatic data to determine the severity and distribution of yield losses from singular or additive events such as frost, heat and crop stress. In addition, spatially specific estimates of crop type and phenology will capture the interactions between G × E × M at a subfield level and thus increase the accuracy of crop production estimates when aggregated to a regional scale (McGrath and Lobell 2013). Below, we discuss in detail the digital technologies that are available to predict crop type and crop phenology.
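As a minimal illustration of matching phenology with climate records, the sketch below counts frost days within a sensitive window around a predicted flowering date. The window width, the 0 °C screen-temperature threshold and the data structure are illustrative assumptions only, not a published method.

```python
def frost_exposure_days(flowering_doy, daily_tmin_by_doy,
                        window=10, frost_threshold=0.0):
    """Count frost days within +/- `window` days of predicted flowering.

    flowering_doy     : predicted flowering date (day of year) for a field/pixel.
    daily_tmin_by_doy : dict mapping day of year -> daily minimum temperature (C).
    Returns a simple proxy for frost exposure during the sensitive window
    around flowering; missing days are treated as frost-free.
    """
    days = range(flowering_doy - window, flowering_doy + window + 1)
    return sum(1 for d in days
               if daily_tmin_by_doy.get(d, float("inf")) <= frost_threshold)

# Toy record: three frosty mornings near a predicted flowering date of day 250.
tmin = {d: 8.0 for d in range(230, 271)}
tmin.update({248: -1.5, 250: -0.5, 252: 0.0})
n_frosts = frost_exposure_days(250, tmin)  # counts days 240-260 -> 3
```

Applied per pixel with spatially varying flowering dates, such a count could map where frost events coincided with the sensitive stage, as described above.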

1.2 Crop reconnaissance from earth observation platforms

The advent of RS technologies from earth observation (EO) since the late 1960s, along with their repetitive and synoptic viewing capabilities, has greatly improved our capability to detect crops on a large scale, with daily or even sub-daily revisits (Castillejo-González et al. 2009; Heupel et al. 2018). Earth observation technology has been applied to crop-type classification over the past half-century; its large-area acquisition capability at regular intervals makes it well suited to tracking the phenological stage of a crop. Many studies have tested the efficiency of using either single-date RS imagery (Langley et al. 2001; Saini and Ghosh 2018, 2019) or multi-temporal RS imagery for crop-type identification (McNairn et al. 2002; Van Niel and McVicar 2004; Potgieter et al. 2007; Upadhyay et al. 2008). Multi-temporal classification approaches have proved more accurate in discriminating between different crop types (Langley et al. 2001; Van Niel and McVicar 2004).

Crop monitoring approaches utilizing meteorological data and EO have been developed globally to support efforts to identify 'hot spots' of likely food shortages and thus address issues related to food insecurity. However, such frameworks are tailored to meet the needs of different decision-makers and hence differ in the importance they place on inputs to the system, as well as in how they disseminate their results (Delincé 2017a). It is evident that crop monitoring and accurate yield and area forecasts are vital tools in aid of more informed decision-making for agricultural businesses (e.g. the seed and feed industry, bulk handlers) as well as government agencies such as the Australian Bureau of Statistics (ABS) and the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES) (Nelson 2018). However, a gap currently constrains the ability to accurately estimate crop phenology and crop species at the field scale. This is mainly due to the coarse spatial resolution (≥30 m) and infrequent revisit periods (≥16 days) of the long-established satellite imagery utilized by current vegetation mapping systems. Newer satellites, such as Sentinel-2 and PlanetScope, offer higher spatial and temporal resolution, but the massive data throughput of these platforms creates both challenges and opportunities for near-real-time analysis.

Crop-type classification is of key importance for determining crop-specific management in a spatio-temporal context. Detailed spectral information provides opportunities for improving accuracy when discriminating between different crop types (Potgieter et al. 2007), but multi-temporal data are required to capture crop-specific plant development (Potgieter et al. 2011, 2013). In the Australian grain cropping environment, temporal information is essential for operations such as disease and weed control (where different chemical controls are specified as suited to different crop growth stages) and for the sequential decisions on applying nitrogen (N) fertilizer in cereals between mid-tillering and flowering. Globally, current systems such as the 'Monitoring Agriculture with Remote Sensing' (MARS) crop monitoring system for Europe, 'CropWatch' from China and the USDA system mainly use MODIS, and estimates are available after crop emergence with moderate accuracies (Delincé 2017b). Recently, a study using Sentinel-2 with supervised crop classification across Ukraine and South Africa showed reasonable overall accuracy (>90 %) but less discriminative power for specific crop types (>65 %) (Defourny et al. 2019). Using time series of Landsat 8 for cropland classification yielded accuracies of 79 % and 84 % for Australia and China, respectively (Teluguntla et al. 2018b). Potgieter et al. (2013) showed that including crop phenology attributes such as green-up, peak and senescence from crop growth profiles (from MODIS at the 250-m pixel level) gave higher accuracies for crop-type discrimination, i.e. 96 % and 89 % for wheat and barley, respectively. The ability to discriminate chickpea was lower but still reasonably high at 75 %. Although this exemplifies the potential of fusing higher-resolution EO with biophysical modelling, the method requires validation across regions and further innovation to determine crop type early in the season and to estimate phenological development within crop species.
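Phenology attributes such as green-up, peak and senescence can be sketched directly from a vegetation-index profile. The following is a simplified illustration in the spirit of such metrics; the half-amplitude threshold and the toy NDVI profile are our own assumptions, not the specific method of Potgieter et al. (2013).

```python
import numpy as np

def phenology_attributes(doy, ndvi, frac=0.5):
    """Extract simple phenology metrics from an NDVI time series.

    Green-up (start of season) and senescence (end of season) are taken
    as the first and last dates at which NDVI exceeds a fraction `frac`
    of the seasonal amplitude above the baseline; the peak is the date
    of maximum NDVI. Thresholds are illustrative only.
    """
    doy, ndvi = np.asarray(doy, float), np.asarray(ndvi, float)
    base, peak_val = ndvi.min(), ndvi.max()
    level = base + frac * (peak_val - base)
    above = np.where(ndvi >= level)[0]
    return {
        "green_up_doy": float(doy[above[0]]),
        "peak_doy": float(doy[ndvi.argmax()]),
        "senescence_doy": float(doy[above[-1]]),
    }

# Toy winter-crop profile sampled as 8-day composites, peaking near day 240.
doys = np.arange(120, 331, 8)
ndvi = 0.15 + 0.6 * np.exp(-((doys - 240) / 40.0) ** 2)
attrs = phenology_attributes(doys, ndvi)
```

Feeding such per-pixel attributes, rather than raw reflectances, into a classifier is what allowed the crop-specific discrimination accuracies reported above.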

Accurate classification of specific crop species from RS data remains a challenge, since cropping systems are often diverse and complex, and the types of crops grown and their growing seasons vary from region to region. Consequently, the success of RS approaches requires calibration to local cropping systems and environmental conditions (Nasrallah et al. 2018). Other factors that can affect classification accuracy include (i) field size, (ii) within-field soil variability, (iii) diversified cropping patterns due to different varieties, (iv) mixed cropping systems with different phenological stages, (v) wide sowing windows and (vi) changes in land use cropping patterns (Medhavy et al. 1993; Delincé 2017b; Zurita-Milla et al. 2017). Environmental conditions, such as droughts and floods, can further affect the reflectance signal and thus reduce classification and prediction accuracies (Allan 2000; Cai et al. 2019). The application of multi-date RS imagery for cropland classification and growth-dynamics estimation has led to an ability to estimate crop growth phenological attributes, which are species-specific and often distinguishable, using time-sequential vegetation index profiles and harmonic analysis (Verhoef et al. 1996).
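Harmonic analysis of a vegetation-index profile amounts to a least-squares fit of sine and cosine terms at the annual frequency and its multiples; the amplitude and phase of the leading harmonic then summarize the crop cycle. A minimal sketch (not the specific formulation of Verhoef et al. 1996), with synthetic data:

```python
import numpy as np

def harmonic_fit(t, y, period=365.0, n_harmonics=2):
    """Least-squares fit of a mean plus `n_harmonics` cosine/sine pairs.

    Returns the coefficient vector [mean, cos1, sin1, cos2, sin2, ...]
    and the fitted series evaluated at `t`.
    """
    t = np.asarray(t, float)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coef, A @ coef

# Recover a known annual NDVI cycle from noisy 16-day composites.
t = np.arange(0, 365, 16)
true = 0.4 + 0.25 * np.cos(2 * np.pi * (t - 60) / 365.0)
rng = np.random.default_rng(0)
coef, fitted = harmonic_fit(t, true + rng.normal(0, 0.01, t.size))
amplitude = np.hypot(coef[1], coef[2])  # first-harmonic amplitude (~0.25)
```

The fitted amplitude and phase vary systematically between crop species with different growth cycles, which is what makes harmonic coefficients useful classification features.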

The following sections discuss the application of EO platforms to detect and quantify crop species by field, together with crop growth dynamics, both considered at a 'regional' level. Specifically, we cover (i) available EO platforms and products for crop-related studies, (ii) analytical approaches and sensor platforms for detecting crop phenology and discriminating between crop types, (iii) the application of machine learning (ML) algorithms in the classification of crop identity and growth dynamics, (iv) the role of crop modelling in augmenting crop phenology estimates from RS, (v) current limitations, (vi) a proposed framework addressing the challenges and (vii) potential implications for industry.

2. SATELLITE SENSORS FOR VEGETATION AND CROP DYNAMICS

Currently, there are >140 EO satellites in orbit, carrying sensors that measure different sections of the visible, infrared and microwave regions of the electromagnetic spectrum relevant to terrestrial vegetation (http://ceos.org/). Figure 3 depicts the optical EO constellations currently in orbit (as of 2020) and future satellite platforms planned for launch up to 2039.

Figure 3. List of current (2020) and planned (up to 2039) EO satellites targeting vegetation dynamics on Earth (Source: http://ceos.org).

The features of these instruments depend on their purpose and vary in several respects, including (i) the minimum size of objects distinguishable on the land surface (spatial resolution), (ii) the number of electromagnetic spectrum bands sensed (spectral resolution) and (iii) the interval between imagery acquisitions (temporal resolution). A comprehensive understanding of these features for past, present and future sensors used for vegetation and crop detection is essential for developing algorithms and applications. National to regional crop-type identification has been carried out in the literature using freely available EO data, such as from Landsat, MODIS and the more recently launched Sentinel missions (https://sentinel.esa.int/web/sentinel/home).

The first Landsat satellite was launched in 1972, carrying an on-board multispectral scanner (MSS) for collecting land information at a landscape scale. Since then, the Landsat program has flown a series of missions, including those carrying the enhanced thematic mapper plus (ETM+), capable of capturing eight spectral bands from the visible to the short-wave infrared at a spatial resolution of predominantly 30 m. The series has enabled continuous global coverage since the early 1990s (https://landsat.gsfc.nasa.gov/). The series is still in operation, with the most recent satellite, Landsat 8, launched in 2013 and covering the same ground track every 16 days. Landsat data have played a key role in generating mid-resolution crop-type and production prediction analyses. For instance, the worldwide cropland maps developed by Teluguntla et al. (2018b) were built primarily from Landsat satellite imagery and represent the highest spatial resolution of any global agriculture product. At local to regional scales, no-cost Landsat imagery has also been used for characterizing crop phenology and generating detailed crop maps (Misra and Wheeler 1978; Tatsumi et al. 2015). The key limitations of Landsat data mainly relate to the limited number of observations available for tracking crop growth and the difficulty of discriminating specific crop types (Wulder et al. 2016).

Although efficient land-cover classification approaches were developed and refined for mapping based on Landsat imagery, it was the increase in the frequency of image acquisition, with, e.g., MODIS, that provided unprecedented insights into landscape vegetation. MODIS is currently one of the most widely used multispectral imaging instruments. First launched in 1999 by NASA, it records 36 spectral bands, seven of which (blue, green, red, two NIR bands and two SWIR bands) are designed for vegetation-related studies. MODIS provides daily global coverage at spatial resolutions from 250 m to 1000 m, as well as 8-day and 16-day composite vegetation index products. Frequently acquired imagery from this sensor provides comprehensive views of vegetation across large areas. Analysis of the long MODIS time series, along with improving understanding of the relationship between reflectance and canopy features, makes the study of vegetation dynamics at large scales possible. For instance, multi-year MODIS time series have been used to characterize cropland phenology and to generate a global cropland extent using a global decision tree classification algorithm (Pittman et al. 2010). Similarly, at the regional scale, winter crop maps in Australia have been produced annually using crop phenology information derived from MODIS time series (Potgieter et al. 2007, 2010). A major limitation of MODIS-based studies is the relatively coarse spatial resolution (250 m), which cannot provide information at small scales (e.g. field and within-field).
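Composite vegetation-index products of the kind mentioned above are commonly built by maximum-value compositing, which retains the highest NDVI observed per pixel across the compositing window, since clouds and poor atmospheric conditions depress NDVI. The sketch below illustrates that idea on synthetic data; it is not the exact MODIS compositing algorithm, which also applies view-angle and quality constraints.

```python
import numpy as np

def max_value_composite(ndvi_stack):
    """Maximum-value composite of a (time, rows, cols) NDVI stack.

    For each pixel, keep the highest NDVI observed within the compositing
    window; cloud-contaminated observations have depressed NDVI and are
    therefore usually discarded. NaNs (missing scenes) are ignored.
    """
    return np.nanmax(np.asarray(ndvi_stack, float), axis=0)

# Three daily scenes over a 2x2 window; day 2 is cloudy (low values).
stack = np.array([
    [[0.60, 0.55], [0.20, 0.70]],
    [[0.10, 0.12], [0.05, 0.08]],    # clouds depress NDVI
    [[0.58, 0.60], [0.25, np.nan]],  # one missing pixel
])
composite = max_value_composite(stack)  # -> [[0.60, 0.60], [0.25, 0.70]]
```

The cloudy scene is effectively removed from the composite, which is why 8- and 16-day products are much cleaner inputs for phenology analysis than daily imagery.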

More recently, Sentinel-2 has served as an alternative source of data for crop researchers. Sentinel-2A was launched in 2015 and provides wide-area images at resolutions of 10 m (visible and near-infrared, NIR), 20 m (red-edge, NIR and short-wave infrared, SWIR) and 60 m (visible to SWIR, for atmospheric applications) with a 10-day revisit frequency. The identical Sentinel-2B satellite was launched in 2017, increasing Sentinel-2's revisit capacity to 5 days. Sentinel-2 imagery has been freely available since October 2015. With its high level of spatial and temporal detail, as well as the complementarity of its spectral bands, Sentinel-2 has enabled tracking of the progressive growth of crops and identification of the most relevant crop-specific growth variables (e.g. crop biophysical parameters, start of season, green-up rate).

In addition, the number of small, relatively inexpensive satellites known as CubeSats has increased over the last decade (McCabe et al. 2017). These new satellites, such as Planet Labs' ('Planet') PlanetScope and Skybox Imaging's SkySat constellations, enable the collection of imagery with both high spatial (<5 m) and high temporal (<1 week) resolution at lower cost (Dash and Ogutu 2016; Jain et al. 2016). They have the potential to monitor and detect rapidly changing environments on the Earth's surface (McCabe et al. 2017) and to obtain multiple measurements of the same field, which is useful for detecting sowing and harvesting dates (Sadeh et al. 2019) or monitoring crops over the growing season (Jain et al. 2016; Sadeh et al. 2021).

The development of EO systems over the past half-century has provided landscape studies with a wealth of imagery and has become a significant data source for measuring and understanding landscape objects such as crops. Users of EO data from a given instrument want to be sure they can continue to rely on its availability into the future. Fortunately, the agencies operating current satellite platforms are planning succession missions to ensure comparable data products remain available beyond the life of current missions. For example, Landsat 9 is slated for launch in 2021 to continue the legacy of the Landsat program; it will detect the same range in intensity as Landsat 8 and will be placed in an orbit eight days out of phase with Landsat 8 to increase the temporal coverage of observations (https://landsat.gsfc.nasa.gov/landsat-9/). Similarly, the visible infrared imaging radiometer suite (VIIRS), on board the Suomi National Polar-orbiting Partnership (Suomi NPP) platform, was launched in October 2011 to extend and improve the measurements of its predecessors, including AVHRR and MODIS (https://ladsweb.modaps.eosdis.nasa.gov/). The twin Sentinel-2 satellites (A/B), designed for a 7-year lifetime, are also planned to be replaced in the 2022–23 time frame by identical missions, extending the data record into the 2030s (Pahlevan et al. 2017).

3. DETERMINING CROPPING AREA, CROP TYPES AND OTHER CROP ATTRIBUTES FROM RS

3.1 Using single-date satellite imagery to derive cropping area and crop types

Since the early 1980s, single-date imagery has been used for crop detection from RS (Macdonald 1984). Approaches based on single-date data typically target imagery acquired around the period of maximum vegetative growth to discriminate cropping areas from non-cropping land uses (Davidson et al. 2017). Where cloud-free images are rare during the vegetation period, especially in near-equatorial regions with few clear days, single-date approaches can be useful (Matvienko et al. 2020). Such approaches require less time and cost (Langley et al. 2001) and achieve moderate accuracy in some local-scale studies (Table 1).

Table 1. List of studies using single-date RS data for crop classification. RF = random forest; SVM = support vector machine; NN = neural network; CNN = convolutional neural network; NDVI = normalized difference vegetation index; EVI = enhanced vegetation index; MSAVI = modified soil-adjusted vegetation index; NDRE = normalized difference red edge.

| Targeted crops | Sensor, temporality | Algorithms | Accuracy | Reference |
| --- | --- | --- | --- | --- |
| Maize, cotton, grass and other intercrops | Sentinel-2B; acquisition centred on 1/1/2017, near peak greenness | K-nearest-neighbours, RF and gradient boosting applied to all 13 spectral bands and vegetation indices (NDVI, EVI, MSAVI and NDRE) | Best result achieved by gradient boosting, with overall accuracy of 77.4 % | Matvienko et al. (2020) |
| Wheat, sugarcane, fodder and other crops | Sentinel-2; image acquired in the growing season | RF and SVM applied to stacked blue, green, red and NIR bands | RF and SVM achieved 84.22 % and 81.85 % overall accuracy, respectively | Saini and Ghosh (2018) |
| Cotton, safflower, tomato, winter wheat, durum wheat | RapidEye; acquired in the middle of the growing season (mid-July) | Softmax regression, SVM, a one-layer NN and a CNN applied to the five spectral bands of RapidEye | All algorithms performed well, with accuracies around 85 % | Rustowicz (2017) |
| Sunflower, olive, winter cereal stubble and other land covers | QuickBird; acquired on 10/7/2004 | Parallelepiped, minimum distance, Mahalanobis distance, spectral angle mapper and maximum likelihood applied to pan-sharpened multispectral bands | Object-based classification outperformed pixel-based classification; maximum likelihood achieved the best overall accuracy of 94 % | Castillejo-González et al. (2009) |
| Corn, rice, peanut, sweet potato, soya bean | WorldView-2; acquired on 3/10/2014 | SVM applied to the eight spectral bands, plus NDVI and textures | Textures marginally increased overall classification accuracy to 95 % | Wan and Chang (2019) |
| Rice, maize, sorghum and soybeans | Landsat 7; all imagery acquired during the summer growing season | Maximum likelihood with no probability threshold applied to all single-date images (to determine the best single-image identification window for each crop) | For sorghum, the primary identification window was mid-April to May, with producer's accuracy >75 % and user's accuracy >60 % | Van Niel and McVicar (2004) |
| Corn, cotton, sorghum and sugarcane | SPOT-5 10-m image; acquired in the middle of the growing season for most crops | Minimum distance, Mahalanobis distance, maximum likelihood, spectral angle mapper and SVM applied to the four original 10-m bands and simulated 20-m and 30-m bands (to assess the impact of pixel size) | Maximum likelihood with 10-m bands achieved the best overall accuracy, >87 % across sites; increasing pixel size did not significantly affect accuracy | Yang et al. (2011) |
| Soybean, canola, wheat, oat and barley | ASD handheld spectroradiometer | Discriminant function analysis to classify collected reflectance data into groups | Crops effectively distinguished using visible and NIR bands with single-date data collected 75–79 days after planting | Wilson et al. (2014) |
| Maize, carrots, sunflower, sugar beet and soya | Sentinel-2 | 10-m blue, green, red and NIR bands used for segmentation, and all bands (except atmospheric) for classification with RF | Overall accuracy of 76 % for crops; red-edge and short-wave infrared bands critical for vegetation mapping | Immitzer et al. (2016) |
Table 1.

List of studies using single-date RS data for crop classification.

Targeted crops | Sensor, temporality | Algorithms | Accuracy | Reference
Maize, cotton, grass and other intercrops | Sentinel-2B, image acquisition date centred on 1/1/2017 near peak greenness | K-nearest-neighbours, RF and gradient boosting applied to all 13 spectral bands and vegetation indices (NDVI, EVI, MSAVI and NDRE) | Best result achieved by gradient boosting, with overall accuracy of 77.4 % | Matvienko et al. (2020)
Wheat, sugarcane, fodder and other crops | Sentinel-2, image acquired in the growing season | RF and SVM applied to stacked blue, green, red and NIR bands | RF and SVM achieved 84.22 % and 81.85 % overall accuracy, respectively | Saini and Ghosh (2018)
Cotton, safflower, tomato, winter wheat, durum wheat | RapidEye, acquired in the middle of the growing season (mid-July) | Softmax regression, SVM, a one-layer NN and a CNN applied to the five spectral bands of RapidEye | All algorithms performed well, with accuracies around 85 % | Rustowicz (2017)
Sunflower, olive, winter cereal stubble and other land covers | QuickBird, acquired on 10/7/2004 | Parallelepiped, Minimum Distance, Mahalanobis Classifier Distance, Spectral Angle Mapper and Maximum Likelihood applied to pan-sharpened multispectral bands of QuickBird | Object-based classification outperformed pixel-based classifications; maximum likelihood achieved the best overall accuracy of 94 % | Castillejo-González et al. (2009)
Corn, rice, peanut, sweet potato, soya bean | WorldView-2, acquired on 3/10/2014 | SVM applied to the eight spectral bands, plus NDVI and textures | Textures marginally increased overall classification accuracy to 95 % | Wan and Chang (2019)
Rice, maize, sorghum and soybeans | Landsat 7, all imagery acquired during the summer growing season | Maximum likelihood with no probability threshold applied to all single-date images (to determine the best single-image identification window for different crops) | For sorghum, the primary identification window was mid-April to May, with producer's accuracy >75 % and user's accuracy >60 % | Van Niel and McVicar (2004)
Corn, cotton, sorghum and sugarcane | SPOT-5 10-m image, acquired in the middle of the growing season for most crops | Minimum Distance, Mahalanobis Distance, Maximum Likelihood, Spectral Angle Mapper and SVM applied to the four original 10-m bands and simulated 20-m and 30-m bands (to check the impact of pixel size) | Maximum likelihood with the 10-m bands achieved the best overall accuracy (>87 %) across the sites; increasing pixel size did not significantly affect accuracy | Yang et al. (2011)
Soybean, canola, wheat, oat and barley | ASD handheld spectroradiometer | Discriminant function analysis used to classify collected reflectance data into groups | The crops can be effectively distinguished using visible and NIR bands with single-date data collected 75–79 days after planting | Wilson et al. (2014)
Maize, carrots, sunflower, sugar beet and soya | Sentinel-2 | 10-m blue, green, red and NIR bands used for segmentation, and all bands (except atmospheric) for classification with an RF algorithm | Overall accuracy of 76 % for crops; red-edge and short-wave infrared bands are critical for vegetation mapping | Immitzer et al. (2016)

Single-date imagery has been used for crop-type classification (Table 1). Saini and Ghosh (2018) used NIR, red, green and blue band reflectance from single-date Sentinel-2 (S2) imagery for crop classification within a 1000 km2 region, applying both support vector machine (SVM) and random forest (RF) methods. The study classified fields with moderate accuracy (80 %) for wheat crops, with wheat pixels misclassified as fodder due to spectral similarities. Matvienko et al. (2020) input multispectral band reflectance from single S2 images acquired around peak crop greenness in South Africa, along with S2 band-derived vegetation indices, into both ML classifiers (i.e. k-nearest-neighbours, RF and gradient boosting) and artificial neural networks (ANNs; i.e. U-Net and SE-blocks) for crop classification tests. The work focused on improving overall classification accuracy through a pixel aggregation strategy based on Bayes' theorem, i.e. assigning prior information that pixels from a particular field belong to the same class. These results indicated that classical ML classifiers are sufficient for crop classification from a single satellite image and that the Bayesian aggregation approach successfully improved crop classification at the field scale. Similarly, crop-type identification (i.e. cotton, safflower, tomato, winter wheat and durum wheat) applying SVM and neural network approaches to mono-temporal RapidEye data achieved comparable accuracies of 86 % and 92 %, respectively (Rustowicz 2017).
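A field-level aggregation of this kind can be sketched as follows, assuming per-pixel class probabilities from any classifier and a field identifier per pixel. The naive-Bayes combination (summing log-probabilities under a uniform prior) and the function name are our illustrative choices, not necessarily the exact rule of Matvienko et al. (2020):

```python
import numpy as np

def aggregate_by_field(pixel_probs, field_ids):
    """Assign one crop class per field by combining per-pixel class
    probabilities (naive-Bayes style: sum of log-probabilities, which
    assumes a uniform prior and conditionally independent pixels)."""
    pixel_probs = np.asarray(pixel_probs, dtype=float)
    field_ids = np.asarray(field_ids)
    log_p = np.log(np.clip(pixel_probs, 1e-12, 1.0))  # guard against log(0)
    labels = np.empty(len(field_ids), dtype=int)
    for fid in np.unique(field_ids):
        mask = field_ids == fid
        # field-level evidence: product of pixel likelihoods -> argmax class
        labels[mask] = log_p[mask].sum(axis=0).argmax()
    return labels

# Toy example: 4 pixels in 2 fields, 3 candidate crop classes.
probs = [[0.6, 0.3, 0.1],   # field 0
         [0.4, 0.5, 0.1],   # field 0 (locally mislabelled as class 1)
         [0.1, 0.2, 0.7],   # field 1
         [0.2, 0.2, 0.6]]   # field 1
print(aggregate_by_field(probs, [0, 0, 1, 1]))  # -> [0 0 2 2]
```

Note how the second pixel, whose single-pixel argmax would be class 1, is overridden by the stronger field-level evidence for class 0.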

Crop classification using single-date images is also common in applications of very high-resolution imagery, as such images are often expensive and difficult to acquire at frequent intervals. Castillejo-González et al. (2009) used 2.8-m QuickBird imagery to identify crops (including sunflower, olive and orchards) and other agricultural land-use types. Classification with multiple classifiers (Minimum Distance, Mahalanobis Classifier Distance, Maximum Likelihood and Spectral Angle Mapper) revealed that a single-date image could efficiently identify crops and agro-environmental measures in a typical dry Mediterranean agricultural area (Castillejo-González et al. 2009). Wan and Chang (2019) tested 2-m WorldView-2 multispectral, single-date data for discriminating among crops including corn, paddy rice, soybean and other vegetable crops at a small experimental site (53 ha). The rich spectral information (eight spectral bands), along with NDVI and texture information, was input into an SVM classifier that achieved overall accuracy >94 %.

Determining crop type from single-date images remains a challenge, mainly due to the complex reflectance signals within and between fields (Saini and Ghosh 2018). The main concept behind crop classification with satellite imagery is that different crop types present different spectral signatures. However, different crop types often exhibit similar responses at a particular phenological stage. In addition, the much wider bandwidths around each centre wavelength in some satellites (e.g. Landsat) result in increased spectral confusion and reduced separability between crop types (Palchowdhuri et al. 2018). To address these shortcomings, two major practices have been implemented. In the first approach, the image is segmented before an object-based classification is introduced, which significantly decreases the salt-and-pepper noise commonly seen in pixel-based classification efforts (Castillejo-González et al. 2009; Peña-Barragán et al. 2011; Li et al. 2015). In addition, incorporating texture-based information tends to substantially improve the classification (Puissant et al. 2005; Wan and Chang 2019). However, due to different sowing dates and environments, crop phenology can differ at any single point in time. In the second approach, images are aligned around specific physiological (flowering) or morphological (maximum canopy) stages, which can improve crop separability. For example, the highest overall single-date classification accuracy for summer crops in south-eastern Australia was found to occur from late February to mid-March (Van Niel and McVicar 2004). By contrast, the optimal timings to classify individual summer crops differed and may vary across seasons. For example, in Australian environments, sorghum can be distinguished from other summer crops most effectively from early April until at least early May.
Similarly, winter cereals (including wheat, barley and other winter crops in the study) showed high classification accuracy from July to August in Brazil (Lira Melo de Oliveira Santos et al. 2019). In north-eastern Ontario, the best acquisition time of satellite data for separating canola, winter wheat and barley was ~75–79 days after planting (Wilson et al. 2014).
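The idea of an optimal acquisition window can be made concrete with a simple separability screen: given vegetation-index series for fields of two crops, score each date by the gap between class means relative to the pooled spread and pick the best date. This is an illustrative heuristic with synthetic data (the function name and the wheat/canola values are our assumptions), not the method used in the studies above:

```python
import numpy as np

def best_separation_date(series_a, series_b):
    """Rank acquisition dates by a simple two-class separability score:
    |difference of class means| / pooled standard deviation at each date.
    series_a, series_b: arrays of shape (n_fields, n_dates) holding a
    vegetation index (e.g. NDVI) for fields of two crop types."""
    a, b = np.asarray(series_a, float), np.asarray(series_b, float)
    diff = np.abs(a.mean(axis=0) - b.mean(axis=0))
    pooled = np.sqrt((a.var(axis=0) + b.var(axis=0)) / 2) + 1e-9
    score = diff / pooled
    return int(score.argmax()), score

# Synthetic NDVI at four acquisition dates for two hypothetical crops:
wheat = [[0.2, 0.5, 0.8, 0.4], [0.3, 0.5, 0.8, 0.4]]
canola = [[0.2, 0.5, 0.3, 0.4], [0.3, 0.5, 0.4, 0.4]]
best, _ = best_separation_date(wheat, canola)
print(best)  # -> 2 (the third acquisition separates the two crops best)
```

In practice more robust class-separability measures (e.g. Jeffries-Matusita distance) are used, but the principle of ranking dates is the same.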

3.2 Using time-sequential imagery to derive crop attributes

Various approaches making use of multi-date imagery have been developed to harness all the information captured by RS throughout the crop growth period (Table 2). A simple strategy for discriminating crop types with multi-date imagery is to combine image data from various dates into a single multi-date image prior to classification (Van Niel and McVicar 2004). Similar strategies, which directly use available RS observations (normally as vegetation indices) at different crop development stages as independent variables input into classification algorithms for mapping crop distributions (the dependent variable), have been widely adopted in the literature (Wang et al. 2014). The direct use of original vegetation index values works well in studies with clear observations at multiple periods, especially during key growth stages (e.g. around flowering and peak greenness). However, these studies ignore the sequential relationships among multi-date values, so their relationships to specific crop growth stages are not considered. Stacking multi-date images into one for classification works most effectively when only limited satellite observations are available, including observations of key growth stages. Adding more dates to a pixel increases redundant, temporally autocorrelated information, which can cause model overfitting and reduce discrimination power (Langley et al. 2001; Van Niel and McVicar 2004; Zhu et al. 2017).
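The stacking strategy described above amounts to flattening a set of single-date images into one per-pixel feature matrix. The sketch below shows this with a hypothetical helper (`stack_dates` is our name, not from the literature); the resulting matrix is what a per-pixel classifier such as maximum likelihood, SVM or RF would be trained on:

```python
import numpy as np

def stack_dates(images):
    """Stack a list of single-date index images (each H x W) into a
    per-pixel feature matrix of shape (H*W, n_dates) - the 'multi-date
    composite' that a per-pixel classifier is then trained on."""
    cube = np.stack(images, axis=0)          # (n_dates, H, W)
    n_dates = cube.shape[0]
    return cube.reshape(n_dates, -1).T       # (H*W, n_dates)

# Three 2x2 NDVI snapshots across a season -> 4 pixels x 3 features.
d1 = np.full((2, 2), 0.2)   # early season
d2 = np.full((2, 2), 0.7)   # peak greenness
d3 = np.full((2, 2), 0.4)   # senescence
X = stack_dates([d1, d2, d3])
print(X.shape)  # -> (4, 3)
```

Each row is now one pixel's temporal profile; note that the column order encodes acquisition date, but the classifier itself sees the columns as unordered features, which is exactly the loss of sequential information discussed above.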

Table 2.

List of studies using a time-sequential data approach for crop classification.

Targeted crops | Sensor, temporality | Algorithms | Accuracy | Reference
Corn, soybean, spring wheat and winter wheat across seasons | 500-m, 8-day MODIS | SVM applied directly to the NDVI time series of different crops | Overall accuracy >87 % across three consecutive seasons | Wang et al. (2014)
Winter wheat, barley, rye, rapeseed, corn and other crops | Landsat 7/8, Sentinel-2A and RapidEye fused NDVI series for two crop seasons | Fuzzy c-means clustering to progressively cluster the fields into groups, which were then assigned classes based on expert experience | Good overall accuracy in 2015 (89 %) but lower accuracy (77 %) in 2016 due to unfavourable weather | Heupel et al. (2018)
Corn and soybean crops over multiple years | Landsat TM and ETM+; band reflectance and EVI calculated from all available imagery in each year | RF applied to different combinations of multi-temporal EVI-derived transition dates, band reflectance at the transition dates and accumulated heat (growing degree days) | Different combinations of indices achieved overall accuracy above 88 % when using data from the same year; using phenological variables only and applying them to different years achieved accuracy >80 % | Zhong et al. (2014)
Corn, soybean and winter wheat | Landsat TM scenes available across the season and 8-day NDVI from MODIS (500 m) | Two data sets fused (using ESTARFM) to create an enhanced time series; decision tree using time-series indices derived from fitted NDVI profiles | Overall accuracy 90.87 %; object-based segmentation before classification reduced in-field heterogeneity and spectral variation and increased accuracy | Li et al. (2015)
Rice | HJ-1A/B and MODIS | ESTARFM-fused time series used to derive phenological variables and construct decision trees for rice fields | Overall accuracy of 93 %, although detection over large regions underestimated rice area by 34.53 % compared to national statistics | Singha et al. (2016)
Rice (single-/double-/triple-cropped) | 8-day, 500-m MODIS surface reflectance product from 2000 to 2012 | Regional peak and time of peak identified from the cubic-spline-smoothed EVI profile to determine single/double/triple rice fields | Overall accuracy >80.6 % across the study period, although the method overestimated rice area by 1–16 % compared to statistics over the period | Son et al. (2014)
Barley, wheat, oat, oilseed, maize, peas and field beans | One WorldView-3 and two Sentinel-2 images acquired during the major vegetative stage | RF tested on three vegetation indices (NDVI, GNDVI, SAVI); a decision tree then introduced to improve the result by using S2 bands to separate spectrally overlapping crops | RF alone achieved overall accuracy of 81 %; introducing the decision tree increased accuracy to 91 % | Palchowdhuri et al. (2018)
Wheat, maize, rice and more | Available Sentinel-2 images | Time-weighted dynamic time warping (TWDTW) applied to the Sentinel-2 NDVI series, with results compared to RF outputs | Object-based TWDTW performed best, with overall accuracy from 78 to 96 % across three diverse regions; regions with more complex temporal patterns showed relatively lower accuracy | Belgiu and Csillik (2018)
Cereals, canola, potato, sugar beet and maize across two seasons | All dual-polarized Sentinel-1 images covering the two crop seasons | Six phenological stages defined for each crop; crops classified according to their phenological sequence patterns | The phenological sequence pattern-based classification achieved better overall results than RF and maximum likelihood, and is resilient to phenological differences caused by farm management | Bargiel (2017)

With the increased availability of frequent observations, deriving phenological indicators from time-series data has become an increasingly efficient and widely adopted approach for regional crop mapping. This allows the creation of a time series that represents crop growth dynamics at the pixel scale. Traditional time-series approaches include harmonic and wavelet analysis (Verhoef et al. 1996; Sakamoto et al. 2005; Potgieter et al. 2011, 2013). These produce crop-related derived metrics such as start of season, peak greenness, end of season and area under the curve. Crop classification using these derived features is reported to improve classification accuracy compared to using original multi-date vegetation index values (Simonneaux et al. 2008). Extraction of phenological features is most efficient with frequently acquired and evenly spaced RS data covering the entire crop growth period. For example, daily MODIS observations and the associated 8-day and 16-day composite products are frequently utilized to analyse changes in crop phenology and discriminate crops and vegetation types at regional and national scales (Wardlow and Egbert 2008; Wang et al. 2014). However, the method is sensitive to missing data and the presence of cloud/shadow pixels. A more complex, but more robust and widely used, feature extraction approach is based on curve fitting of the time-series vegetation index. Here, a continuous time series of the targeted vegetation index is generated by fitting a pre-defined function to fill in data missing due to cloud/shadow cover. A range of mathematical functions has been applied to fit vegetation index time series, including linear regression (Roerink et al. 2000), logistic (Zhang et al. 2003), Gaussian (Potgieter et al. 2013), Fourier (Roerink et al. 2000) and Savitzky-Golay filtering (Chen et al. 2004). A detailed comparison of curve-fitting algorithms was made by Zeng et al. (2020).
These approaches have been applied intensively to discriminate between crop types across regions. Numerous studies have demonstrated that inter-class (between crop classes) confusion can be effectively reduced by harnessing the phenological differences between crop species within the same geographical region (Sakamoto et al. 2005; Potgieter et al. 2013).
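As a minimal illustration of the feature-extraction step, the sketch below derives start/end of season, peak timing and area under the curve from an already gap-filled vegetation-index series. The function name and the 50 %-of-amplitude threshold are our assumptions; operational pipelines would first fit one of the curve models listed above (logistic, Gaussian, Fourier) to the raw observations:

```python
import numpy as np

def phenology_metrics(doy, ndvi, frac=0.5):
    """Derive simple phenological metrics from a gap-filled (fitted)
    vegetation-index series: start/end of season as the first/last
    point at or above a fractional-amplitude threshold, peak timing
    and value, and area under the curve (trapezoidal rule)."""
    doy, ndvi = np.asarray(doy, float), np.asarray(ndvi, float)
    base, peak = ndvi.min(), ndvi.max()
    thr = base + frac * (peak - base)        # e.g. 50 % of seasonal amplitude
    above = np.where(ndvi >= thr)[0]
    auc = float(((ndvi[1:] + ndvi[:-1]) / 2 * np.diff(doy)).sum())
    return {
        "start_of_season": doy[above[0]],
        "end_of_season": doy[above[-1]],
        "peak_doy": doy[ndvi.argmax()],
        "peak_value": float(peak),
        "auc": auc,
    }

# A synthetic single-season NDVI profile sampled every 20 days:
metrics = phenology_metrics([100, 120, 140, 160, 180, 200, 220],
                            [0.2, 0.3, 0.6, 0.8, 0.6, 0.3, 0.2])
print(metrics["start_of_season"], metrics["peak_doy"])  # -> 140.0 160.0
```

These per-pixel metrics are the features that the decision-tree and RF classifiers in Table 2 operate on.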

3.3 Multispectral and hyperspectral data to detect crop phenology and crop type from airborne platforms

Recent advances in sensor technologies, which have become lighter and offer enhanced resolution (e.g. improved spectral and spatial resolutions of cameras and sensors, and increased accuracy of global positioning systems (GPS)), have led to a rapid increase in the use of drones, also known as unmanned aerial vehicles (UAVs), unmanned aerial systems (UAS) and remotely piloted aircraft systems (RPAS). Unmanned aerial vehicles allow high-throughput phenotyping of targeted traits such as head number, stay-green, biomass and leaf area index (LAI) with specifically developed predictive models (Chapman et al. 2014, 2018; Potgieter et al. 2017; Guo et al. 2018).

The use of UAVs in land cover detection has shown promise (Table 3). Images captured with different sensors/cameras, including RGB, multispectral, hyperspectral and thermal cameras, have been used to monitor crop type and crop canopy, and to quantify canopy structural and biophysical parameters (LAI, photosynthetic and non-photosynthetic pigments, transpiration rates, solar-induced fluorescence) (Ahmed et al. 2017; Bohler et al. 2018). Analysis of multispectral (NIR, red, green and blue), multi-date data acquired on-board UAVs has enabled the classification of cabbage and potato crops (overall accuracy >98 %) (Kwak and Park 2019). Single-date UAV data were found to be less effective in discriminating crops because of the heterogeneity captured by fine-resolution pixels. However, including additional texture information (e.g. homogeneity, dissimilarity, entropy and angular second moment) when classifying single-date UAV images can significantly increase classification accuracy (Kwak and Park 2019; Xu et al. 2019). Data from optical cameras sensitive to the visible region (RGB) have also had reasonable success in crop-type detection. For example, a multi-date NDVI decision tree classifier achieved high overall accuracy (99 %) in discriminating 17 different crops in agronomic research at plot scale (Latif 2019).
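The texture measures named above are typically computed from a grey-level co-occurrence matrix (GLCM). The pure-numpy sketch below (horizontal neighbour pairs only, pre-quantized grey levels) is illustrative rather than operational; libraries such as scikit-image provide optimized, multi-offset equivalents:

```python
import numpy as np

def glcm_features(img, levels=4):
    """Texture measures from a grey-level co-occurrence matrix (GLCM)
    of horizontal neighbour pairs: homogeneity, dissimilarity, entropy
    and angular second moment (ASM). `img` holds integer grey levels
    in [0, levels)."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), float)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1                      # count co-occurring pairs
    glcm /= glcm.sum()                       # normalize to probabilities
    r, c = np.indices((levels, levels))
    nz = glcm[glcm > 0]                      # entropy only over non-zeros
    return {
        "homogeneity": float((glcm / (1.0 + np.abs(r - c))).sum()),
        "dissimilarity": float((glcm * np.abs(r - c)).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum() + 0.0),
        "asm": float((glcm ** 2).sum()),
    }

# A perfectly uniform patch: maximal homogeneity/ASM, zero entropy.
print(glcm_features(np.zeros((4, 4), dtype=int)))
```

Appending such features to the spectral bands of each pixel (or image object) is what lifted the single-date UAV classification accuracies reported by Kwak and Park (2019) and Xu et al. (2019).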

Table 3.

List of studies using UAV platform collected data for crop classification.

| Targeted crops | Sensors | Algorithm | Accuracy | Reference |
| --- | --- | --- | --- | --- |
| Maize, sugar beet, winter wheat, barley and rapeseed | Canon RGB and Canon NIRGB | RF applied to the texture information extracted from photos | Best pixel-based overall accuracy is 66 %; best object-based accuracy is 86 % | Bohler et al. (2018) |
| Cabbage and potato | Multi-date Canon camera with NIR, red and green bands | RF and SVM applied to multi-date or single-date photo bands and extracted texture | Multi-date photos achieved high accuracy (>98 %), with texture contributing little; single-date photos achieved ~90 % accuracy, with significant benefit from texture | Kwak and Park (2019) |
| Cultivated fields (aggregated crops) | Single-date RGB camera | Hierarchical classification based on vegetation index and texture | Overall accuracy is 86.4 % | Xu et al. (2019) |
| Wheat, barley, oat, clover and 13 other crops | Multi-date photos with green, red and NIR bands | Decision tree applied to the derived NDVI series | Overall accuracy > 99 % in recognizing 17 crops; early- to mid-season photos contribute more to classification | Latif (2019) |
| Corn, wheat, soybean, alfalfa | Single-date RGB and multispectral (G, R, NIR and red-edge) | Object-based image classification; classes assigned based on spectral characteristics | Multispectral data achieved higher accuracy (89 %) than RGB photos (83 %) | Ahmed et al. (2017) |
| Cotton, rape, cabbage, lettuce, carrots and other land covers | Single-date hyperspectral image containing 270 spectral bands | Spectral-spatial fusion based on conditional random fields (SSF-CRF) | SSF-CRF outperformed other classifiers (e.g. RF), with overall accuracy > 98 % at the test sites (400 × 400 pixels in site 1; 300 × 600 pixels in site 2) | Wei et al. (2019) |
| Cowpea, soybean, sorghum, strawberry and other land covers | Hyperspectral images with 224 bands at 3.7 m resolution (site 1) and 274 channels at 0.1 m (site 2) | SSF-CRF | Overall accuracy 99 % at site 1 (400 × 400 pixels) and 88 % at site 2 (303 × 1217 pixels) | Zhao et al. (2020) |

The rich spectral information captured from recently developed UAV-borne hyperspectral sensors is an attractive option for detecting crop species. Wei et al. (2019) proposed a 'Spatial-Spectral Fusion based on Conditional Random Fields' (SSF-CRF) classification method for harnessing hyperspectral data in classifying crop types. This involved deriving the morphology, spatial texture and mixed-pixel decomposition using a CRF mathematical function for each spectral band, creating a spectral-spatial feature vector that enhances the spectral changes and heterogeneity within the same feature. Applying the method at two experimental sites (each ~400 × 400 pixels) achieved accuracies >97 %, primarily for vegetable crops. Furthermore, out-scaling such an approach allowed distinction between various crop types (soybean, sorghum, strawberry and other land cover types) in larger fields (over 300 × 1200 pixels) and other regions in Hubei Province, China, with an overall accuracy of 88 % (Zhao et al. 2020).

Although airborne platforms are capable of capturing high spatial (<1 cm) and spectral resolutions, these do not always yield higher classification accuracies than a single index at lower (e.g. >1 m) resolution across multiple dates (Gamon et al. 2019). For example, Bohler et al. (2018) compared the change in accuracy when UAV-captured data were resampled to different spatial resolutions (from 0.2 to 2 m) and found that the best resolution for discriminating individual crops was around 0.5 m. In their heterogeneous landscape, smaller pixel sizes resulted in lower classification accuracies. Thus, higher resolution can be a disadvantage owing to the increase in mixed reflectances from more prominent features, such as soil background, single leaves and shadows from leaves. Nevertheless, it is generally accepted that high-resolution hyperspectral imagery opens new opportunities for the independent analysis of individual scene components to properly understand their spectral mixing in heterogeneous crops.

It is anticipated that the ability to capture high temporal, spatial and spectral resolution reflectance data on board UAVs will enable further advances in understanding physiological and functional crop processes at the canopy level.

3.4 Integrating crop information derived from different RS platforms

Remote sensing-derived information has four attributes: (i) spatial resolution, (ii) temporal resolution, (iii) number of spectral bands and (iv) the wavelength width of each band. To date, only a few attempts have been made to integrate information from different sensing platforms, mainly through fusion approaches. Fusion typically incorporates such metrics from two different RS platforms, using an RF classifier to discriminate crop type and other land use classes (Lobo et al. 1996; Ban 2003; McNairn et al. 2009; Ghazaryan et al. 2018; Lira Melo de Oliveira Santos et al. 2019; Zhang et al. 2019). Most of these approaches have used derived RS variables (indices) that correlate strongly with known crop morphological and physiological attributes (e.g. LAI, percent cover, green/red chlorophyll, canopy structure or canopy height).
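Two widely used examples of such derived indices are sketched below as generic formulas (not those of any particular study cited here): NDVI, which tracks green cover and LAI, and the green chlorophyll vegetation index (GCVI), which correlates with canopy chlorophyll.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red + 1e-12)

def gcvi(nir, green):
    """Green chlorophyll vegetation index: NIR / green - 1."""
    return nir / (green + 1e-12) - 1.0

# Per-pixel reflectances (illustrative values) -> index layers for fusion.
nir = np.array([0.5, 0.4])
red = np.array([0.1, 0.2])
green = np.array([0.1, 0.15])
ndvi_vals = ndvi(nir, red)
gcvi_vals = gcvi(nir, green)
```

The small epsilon guards against division by zero over dark pixels (e.g. water or shadow).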

Furthermore, some studies generate a continuous time series of enhanced EO data by integrating RS data that vary both spatially and temporally (e.g. MODIS 250 m and Landsat 30 m) (Liu et al. 2014; Li et al. 2015; Waldhoff et al. 2017; Sadeh et al. 2020). One such approach is the spatial and temporal adaptive reflectance fusion model (STARFM), which was first developed to fuse Landsat and MODIS surface reflectance over time (Gao et al. 2006; Li et al. 2015; Zhu et al. 2017). Applying STARFM to generate missing dates of high-resolution imagery can significantly decrease the average overall classification error compared with classification using Landsat data only (Zhu et al. 2017). However, care should be taken where there is a high frequency of missing data, since this can result in high prediction errors (Zhu et al. 2017).
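The core temporal-change idea behind such fusion can be caricatured as follows. This is a drastic simplification for illustration only: the full STARFM also weights spectrally and spatially similar neighbouring pixels, and the 8× resolution ratio here is an assumption.

```python
import numpy as np

def fuse_temporal_change(fine_t0, coarse_t0, coarse_tp):
    """Predict a fine-resolution image at date tp from a fine base image at t0
    plus the reflectance change observed in the coarse-resolution image pair."""
    scale = fine_t0.shape[0] // coarse_t0.shape[0]   # fine pixels per coarse pixel
    # Upsample the coarse-scale change by block replication, then add it on.
    change = np.kron(coarse_tp - coarse_t0, np.ones((scale, scale)))
    return fine_t0 + change

# Fine image at t0 plus a uniform 0.1 green-up seen only at coarse resolution.
fine_t0 = np.zeros((16, 16))
pred = fuse_temporal_change(fine_t0, np.zeros((2, 2)), np.full((2, 2), 0.1))
```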

Efforts have also been made to fuse high-resolution UAV images (with limited spectral bands) with multispectral satellite images (more spectral bands but relatively lower spatial resolution) for finer crop classification (Jenerowicz and Woroszkiewicz 2016; Zhao et al. 2019). Fusion of Sentinel-2 imagery with UAV images through a Gram-Schmidt transformation resulted in superior accuracy (overall > 88 %) compared with using each data set separately (Zhao et al. 2019).

4. LAND USE AND CROP CLASSIFICATION APPROACHES

Various statistical approaches are frequently utilized to classify RS data into different land use or crop-type categories. Such approaches can be divided into supervised, unsupervised and decision tree classification approaches (Campbell 2002). These are usually applied to either the entire time series or a derivative of the time series representing the main aspects of crop growth. However, advances in computing power (e.g. cloud computing) and high-resolution imagery have also allowed the application of ML and more complex deep learning (DL) approaches.

4.1 Unsupervised and supervised approaches

Unsupervised classification algorithms, e.g. ISODATA and K-means clustering, define natural groupings of spectral properties at pixel scales and require no prior knowledge (Xavier et al. 2006). These approaches attempt to determine a number of groups and assign pixels to them with no input from users. In contrast, decision tree approaches need specific threshold cut-offs to allow crop-type discrimination (Lira Melo de Oliveira Santos et al. 2019). In unsupervised classification, the number of classes is usually arbitrarily defined, making it a less robust approach than supervised classification (Niel and McVicar 2004; Xavier et al. 2006).
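To make the unsupervised idea concrete, here is a minimal K-means sketch over per-pixel feature vectors (e.g. an NDVI time series per pixel). The farthest-point initialization is an implementation convenience for determinism, not part of the classical algorithm description.

```python
import numpy as np

def kmeans(pixels, k=2, iters=20):
    """Cluster pixel feature vectors into k spectral groups with no labels."""
    centroids = [pixels[0]]                      # farthest-point initialization
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centroids], axis=0)
        centroids.append(pixels[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)            # assign each pixel to nearest centroid
        for c in range(k):
            if (labels == c).any():
                centroids[c] = pixels[labels == c].mean(axis=0)
    return labels, centroids

# Two synthetic 'spectral groups' of 50 pixels each, 4 features per pixel.
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0.2, 0.01, (50, 4)),
                    rng.normal(0.8, 0.01, (50, 4))])
labels, centroids = kmeans(pixels, k=2)
```

The analyst must still attach crop-type meaning to each cluster afterwards, which is exactly the arbitrariness noted above.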

Up to now, most crop classification studies have used supervised classification methods, i.e. data sets are split (e.g. 75:25) into a 'training' step followed by a 'validation' step on a withheld sample of the data set. Arriving at a final model requires expertise in designing the classifier and setting the parameters of the algorithm. Common examples include maximum likelihood classification (MLC), SVMs and RF. The common principle behind these supervised classifiers is that the classifier is trained on field observations (prior knowledge) and the model is then applied to the remaining imagery pixels, for either a few images or across all dates.

The MLC is one of the most traditional parametric classifiers and is usually used when data sets are large enough that the distributions of objects can be assumed to approach a Gaussian (normal) distribution (Benediktsson et al. 1990; Foody et al. 1992). Briefly, MLC assigns each pixel to the class with the highest probability, using variance and covariance matrices derived from the training data (Erbek et al. 2004). Studies have found that MLC works effectively with relatively low-dimensional data and can achieve comparatively fast and robust classification results when fed with sufficient quality data (Waldhoff et al. 2017).
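A numpy sketch of that MLC rule, assuming equal class priors; the synthetic two-band training data are purely illustrative.

```python
import numpy as np

def mlc_fit(X, y):
    """Per-class mean vector and variance-covariance matrix from training pixels."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def mlc_predict(X, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = sorted(stats)
    scores = []
    for c in classes:
        mu, cov = stats[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = X - mu
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-0.5 * (logdet + mahal))  # constant terms cancel across classes
    return np.array(classes)[np.argmax(scores, axis=0)]

# Two well-separated 'crops' in a two-band feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.2, 0.3], 0.05, (100, 2)),
               rng.normal([0.6, 0.7], 0.05, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
pred = mlc_predict(X, mlc_fit(X, y))
```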

SVM is a non-parametric classifier, which makes no assumptions of the distribution type on the training data (Foody and Mathur 2004; Ghazaryan et al. 2018). Its power lies in the fact that it can work well with higher dimensional features and generalize well even with a small number of training samples (Foody and Mathur 2004; Mountrakis et al. 2011).

Currently, one of the most commonly used approaches, especially for crop discrimination, is RF. RF is an ensemble ML classifier that aggregates a large number of random decision trees for classification (Breiman 2001). RF can efficiently manage large data sets with thousands of input variables and is robust to outliers (Breiman 2001; Li et al. 2015). In addition, RF does not overfit as the number of trees increases (Ghazaryan et al. 2018). A recent study using a time series of Landsat and Sentinel data with a supervised two-step RF approach achieved moderately acceptable accuracies (>80 %) in discriminating crops from non-crops across the broad land uses of China and Australia (Teluguntla et al. 2018a). A comprehensive list of supervised classifiers and their comparison is available in the review by Lu and Weng (2007). Overall, there is no single best image classification method for detailed crop mapping. Methodologies based on supervised classifiers generally outperform others (Davidson et al. 2017), but require training with ground-based data. The SVM, MLC and RF supervised classifiers are also grouped within the suite of first-order ML approaches.
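The supervised workflow described above can be sketched as follows, assuming scikit-learn is available; the two synthetic 'crops' with different seasonal NDVI profiles stand in for real field observations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Per-pixel NDVI time series (8 dates) for two crops peaking at different times.
crop_a = rng.normal([0.2, 0.4, 0.6, 0.8, 0.7, 0.5, 0.3, 0.2], 0.05, (n, 8))
crop_b = rng.normal([0.2, 0.3, 0.5, 0.7, 0.9, 0.7, 0.4, 0.2], 0.05, (n, 8))
X = np.vstack([crop_a, crop_b])
y = np.array([0] * n + [1] * n)

# 75:25 split: train on 'field observations', validate on withheld samples.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_va, clf.predict(X_va))
```

Once validated, the fitted classifier would be applied to all remaining imagery pixels.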

Supervised classification remains the standard approach when higher classification accuracies are needed. However, it requires good quality and a sufficiently large sample of local field observation data for training the classifier. This poses challenges for specific crop-type classification in regions where spatially explicit field-level information is unavailable.

4.2 ML and DL techniques

Both ML and DL approaches are subsets of artificial intelligence (AI) techniques (Fig. 4). Machine learning builds models from real-world data relating inputs to the object to be predicted. Such models are typically trained on more data over time, which refines the model parameters and increases prediction accuracy. Machine learning approaches include supervised learning, unsupervised learning and reinforcement learning techniques. Deep learning extends classical ML by introducing more 'depth' (complexity), transforming data across multiple layers of abstraction using different functions to represent the data hierarchically.

Figure 4.

The relationship of AI, ML and DL (Adapted from: www.argility.com).

Deep learning techniques have increasingly been used in RS applications due to their mathematical robustness, which generally allows more accurate extraction of features from large data sets. Artificial neural networks (ANNs), recurrent neural networks (RNNs) and convolutional neural networks (CNNs) represent three major architectures of deep networks. The ANN is also known as a feed-forward neural network because information is processed only in the forward direction; limited by this design, it cannot handle sequential or spatial information efficiently. Traditionally, a CNN consists of a number of convolutional and subsampling layers, optionally followed by fully connected layers. It is designed to automatically and adaptively learn spatial hierarchies of features. Owing to its architecture, it has become dominant in various computer vision tasks (Yamashita et al. 2018). In agricultural applications, CNNs can extract distinguishable features of different crops from remotely sensed imagery in a hierarchical way to classify crop types. The term 'hierarchical' indicates the convolution process through either 1D convolution across the spectral dimension, 2D across the spatial dimensions (x/y locations), or 3D across the spectral and spatial dimensions simultaneously (Zhong et al. 2019). Figure 5 depicts the broad architecture of convolution and image classification steps.
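The 1D (spectral) convolution step can be sketched in plain numpy. This is the building block only, not a trainable network: the filter bank is randomly initialized purely for illustration, and the band count and filter widths are assumptions.

```python
import numpy as np

def conv1d_relu(signal, kernels, stride=1):
    """Valid 1D convolution of a spectral signature with a bank of kernels,
    followed by a ReLU activation -- one layer of a 1D CNN."""
    width = kernels.shape[1]
    out_len = (len(signal) - width) // stride + 1
    out = np.empty((len(kernels), out_len))
    for f, w in enumerate(kernels):
        for i in range(out_len):
            out[f, i] = np.dot(signal[i * stride:i * stride + width], w)
    return np.maximum(out, 0.0)

# A 64-band pixel spectrum -> 8 feature maps -> max-pooled features.
rng = np.random.default_rng(0)
spectrum = rng.random(64)
fmaps = conv1d_relu(spectrum, rng.normal(0.0, 0.5, (8, 5)))  # -> (8, 60)
pooled = fmaps.reshape(8, 12, 5).max(axis=2)                 # pool width 5 -> (8, 12)
features = pooled.ravel()                                    # input to a dense classifier
```

Stacking such layers is what builds the hierarchical representation described above; in a real framework the kernels are learned by backpropagation.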

Figure 5.

Framework of a CNN for RS imagery-based classification. For example, it can use information from soil, climate, terrain and RS indices as inputs and generate a crop-type classification map as the output. The 3D data cube is displayed in R (band 30), G (band 20) and B (band 10). PRISMA satellite image captured on 15 September 2020 for a study region in Victoria showing six crops. PRISMA has 66 bands in the VNIR channel and 171 bands in the SWIR channel, all at 30 m resolution (https://earth.esa.int/web/eoportal/satellite-missions/p/prisma-hyperspectral). Adapted from Liu et al. (2018).

For example, Hu et al. (2015) developed a 1D CNN architecture containing five weighted layers for supervised hyperspectral image classification of eight different crop types in India, which generated better results than classical SVM classifiers. A 2D convolution within the spatial domain can, in general, discriminate crop classes more reliably than 1D CNNs (Kussul et al. 2017). One recent study also developed 3D CNN architectures to characterize the structure of multispectral multi-temporal RS data (Ji et al. 2018). The proposed 3D CNN framework performed well in training on 3D crop samples and learning spatio-temporal discriminative representations; the authors concluded that 3D CNNs are especially suitable for characterizing the dynamics of crop growth and outperformed 2D CNNs and other conventional methods (Ji et al. 2018).

The recurrent neural network is another well-adopted DL architecture for classification. As the name indicates, RNNs are specialized for sequential data analysis and have been considered a natural candidate for learning temporal relationships in time-series imagery. Ndikumana et al. (2018) applied two RNN-based classifiers to multi-temporal Sentinel-1 data over an area in France for rice classification and achieved notably high accuracy (F-measure of 96 %), outperforming classical approaches such as RF and SVM. However, in Zhong et al. (2019), an RNN framework applied to a Landsat EVI time series achieved lower crop classification accuracy (82 %) than a 1D CNN framework (86 %) that treated the temporal profiles as spectral sequences.
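The recurrence that lets an RNN summarize a temporal profile can be sketched as a single Elman cell run over one pixel's time series; the weights here are randomly initialized purely for illustration, and the dimensions are assumptions.

```python
import numpy as np

def rnn_summary(series, Wx, Wh, b):
    """Run a simple recurrent (Elman) cell over a time series of observations;
    the final hidden state summarizes the whole temporal profile."""
    h = np.zeros(Wh.shape[0])
    for x_t in series:                         # one observation date at a time
        h = np.tanh(Wx @ x_t + Wh @ h + b)     # state carries past dates forward
    return h

rng = np.random.default_rng(0)
hidden, n_bands, n_dates = 16, 4, 12
Wx = rng.normal(0.0, 0.3, (hidden, n_bands))
Wh = rng.normal(0.0, 0.3, (hidden, hidden))
b = np.zeros(hidden)
series = rng.random((n_dates, n_bands))        # 12 dates x 4 bands for one pixel
state = rnn_summary(series, Wx, Wh, b)
```

In practice the final state feeds a softmax layer and all weights are learned by backpropagation through time.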

Overall, NNs offer parallel processing, adaptive capability for multispectral images and good generalization, and require no prior knowledge of the probability distribution of the targeted real-world data. This makes them superior to the traditional statistical classification methods discussed earlier. However, the data processing demands of NNs are high, and they typically need to be deployed on cloud computing platforms, often using GPUs (graphics processing units), to provide practical results.

5. DATA DELIVERY PLATFORMS

Several data platforms are currently operational and are used to track and access satellite data over time. Most of these platforms target crop-type classification applications utilizing statistical and/or ML approaches (Table 4). The platforms mainly provide data at high temporal and spatial resolutions and therefore have appreciable capability for monitoring areas over time, as well as for zoning in-field variability.

Table 4.

A non-exhaustive list (at March 2021) of commercialized platforms for monitoring cropping patterns at field levels.

| Platform | Headquarters | Outputs | Cost |
| --- | --- | --- | --- |
| Data Farming | Australia | Track crop and pasture performance freely at 10-m resolution and at finer scales (3 m or <1 m) for a low cost | AU$1.5 per hectare for 3-m resolution data; AU$10 per hectare for sub-metre imagery |
| Sata Crop | Australia | Map fields and characterize what crops have been planted; inform growers where to spray and reduce risk of spray drift | Free |
| DAS (Digital Agriculture Services) | Australia | Land use maps for rural Australia, including at least 17 different crop species with tested accuracy of 75 % | Free access to the platform |
| Precision Agriculture | Australia | Measure and monitor crop growth to define and quantify the extent of yield constraints and develop variable rate management strategies | https://precisionagriculture.com.au/ |
| One Soil | Belarus | Map 60 million agricultural fields across 43 European countries and the USA using AI, ML, computer vision and data visualization | Free access to the platform |
| Agricam | Israel | Provide tools including multiple growth layer maps, index maps and graphs to assist fertilizer management, pest control, stress detection, irrigation management, etc. | https://www.agricam-ag.com/ |
| Planet Watchers | Israel | Classify crop types anywhere in the USA earlier than ever before | https://www.planetwatchers.com/ |
| XARVIO | Germany | Deliver digital products including spray time recommendations and field-zone-specific variable application maps (based on a crop model platform) | https://www.xarvio.com/ |
| Crop Zone | Germany | Provide an integrated weed management service | https://crop.zone/ |
| EOS | | Identify problem areas, plan fieldwork tasks and provide real-time alerts, using satellites to monitor crop changes | 0–750 per annum depending on field size; https://eos.com/ |

6. DETERMINING OF CROP PHENOLOGY

6.1 Crop phenology

The sensitive phenological stages of crops differ between crop types, and their timing varies across regions and seasons (Dreccer et al. 2018b). Plant development has phases defined in terms of microscopic and macroscopic changes that have typically been integrated into phenological scales specific to crop species. In wheat, for instance, the main developmental stages include tillering, stem elongation, heading, flowering and maturity (Feekes 1941); the development scale of Haun (1973) is commonly used to score leaf emergence on the main stem, while the scale of Zadoks et al. (1974) describes the wheat lifecycle from germination through to ripening (Fig. 6). Each stage represents an important change in the morphology and function of different plant organs. Accurate detection and prediction of these stages are essential for crop discrimination-oriented research. Crop phenological stages of cereals are mainly driven by thermal time, modulated by vernalization and photoperiod, but are also affected by other environmental factors (Hyles et al. 2020) and vary between cultivars. The key phenological stages of importance to producers and industry are:

Figure 6. Zadoks scale for winter cereals (GRDC 2005).

  1. Management interventions

    • a. Weeds, especially early season (many herbicides are only registered for use at specific stages)

    • b. Fertilizer

    • c. Management of diseases (e.g. anticipation of fungal outbreaks)

  2. Concerns about stress impacts

    • a. Establishment

    • b. Stress around flowering

    • c. Stress during grain development
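The thermal-time driver mentioned above can be illustrated with a minimal growing degree-day sketch. This is not the GRDC/Zadoks definition itself, and the base temperature and stage threshold below are hypothetical values chosen purely for illustration:

```python
# Illustrative growing degree-day (GDD) accumulation. The base temperature
# and the stage threshold are hypothetical, for illustration only; real
# phenology models also account for vernalization and photoperiod.

def thermal_time(tmin, tmax, t_base=0.0):
    """Daily thermal time: mean temperature above a base temperature."""
    return max(0.0, (tmin + tmax) / 2.0 - t_base)

def stage_reached(daily_temps, threshold_cd):
    """Return the day index on which cumulative thermal time first
    exceeds `threshold_cd` (degree-days), or None if never reached."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(daily_temps):
        total += thermal_time(tmin, tmax)
        if total >= threshold_cd:
            return day
    return None

# Example: constant 5-15 degC days contribute 10 degree-days each.
temps = [(5.0, 15.0)] * 200
day = stage_reached(temps, 1000.0)  # hypothetical stage threshold
```

In practice the thresholds would be calibrated per cultivar, which is exactly the variation that models such as APSIM encode.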

6.2 Using RS to detect crop phenology

Earth observation systems are widely used to observe the morphological and physiological properties of crops with respect to their spectral, structural, biophysical or agronomic characteristics (Campbell 2002). However, EO systems require a near-complete season of cloud-free satellite imagery to ensure that all phenological stages are captured and that transitions between them are detected accurately. In addition, reduced spatial and temporal resolutions limit the ability of EO platforms to determine subtle phenological differences between similar crops such as wheat, barley and oats. It is anticipated that combining crop modelling and RS will provide a potential solution to address this shortcoming.
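One common way of coping with cloud-affected observations is to smooth the vegetation-index time series before extracting stage transitions, e.g. with a Savitzky-Golay filter (the approach of Chen et al. 2004, cited below). A minimal sketch, in which the window length, polynomial order and synthetic season are illustrative choices rather than recommended settings:

```python
# Sketch: damping a cloud-contaminated dip in an NDVI time series with a
# Savitzky-Golay filter so that green-up/senescence transitions are easier
# to detect. Window and polynomial order are illustrative only.
import numpy as np
from scipy.signal import savgol_filter

def smooth_ndvi(ndvi, window=7, polyorder=2):
    """Smooth an evenly spaced NDVI series with a Savitzky-Golay filter."""
    return savgol_filter(np.asarray(ndvi, dtype=float), window, polyorder)

# Synthetic season: green-up then senescence, one cloud-contaminated dip.
t = np.linspace(0, np.pi, 30)
ndvi = 0.2 + 0.6 * np.sin(t)
ndvi[12] = 0.15  # simulated cloud contamination
smoothed = smooth_ndvi(ndvi)
```

The smoothed series stays much closer to the underlying seasonal curve at the contaminated date than the raw observation does.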

6.3 Augmenting crop phenology estimates using crop modelling

Dynamic crop models like APSIM (Holzworth et al. 2014) are known for their ability to accurately estimate crop growth and development across a wide range of management practices and environmental conditions (Chenu et al. 2017). They are therefore tools that could be used to enhance the prediction of phenology from RS time series. While RS captures snapshots (5–14 days apart) of the crop growth cycle, crop modelling provides phenology profiles at a daily time step. Dynamic crop models can also be utilized as diagnostic tools by simulating morphological crop attributes such as leaf area index (LAI) (Waldner et al. 2019). Evidence suggests that soil water can influence flowering time in chickpea and wheat, but current simulation models rarely account for this effect (Chauhan et al. 2019), illustrating additional benefits of combining RS-based techniques with crop models.

Crop modelling is also useful for exploring crop growth and development across multiple combinations of soil, climate, genotype and management practices. While crop models typically focus on the field level, coupling RS and crop modelling will indirectly enable yield prediction across large scales. In this case, outputs from RS are used to regularly update the simulated LAI of a crop model and thus adjust predicted yield in an iterative approach throughout the crop growth period (Lobell et al. 2015). Crop models allow predictions of yield and phenological development stages for both current and projected climate scenarios (Potgieter et al. 2006, 2016; Chapman 2008; Chenu et al. 2011, 2013). This enables phenology to be forecast ahead of the stage actually occurring (Xue et al. 2004) and enhances predictive capability, a feature that complements RS approaches, which on their own cannot project into the future (Heupel et al. 2018). In addition, dynamic crop models can be utilized to enhance our understanding of crop adaptation to different management and future climate scenarios (Hochman et al. 2009; Zheng et al. 2012b, 2016; Chenu et al. 2017; Watson et al. 2017), as well as analyses linking traits such as water use efficiency to genomic regions (i.e. phenotype to genotype and vice versa) (Chenu et al. 2009; Messina et al. 2018; Bustos-Korts et al. 2019). Combining RS and crop modelling thus opens doors to applications such as optimizing management practices at the field or subfield level for the remainder of the season, or assessing gene/trait value at regional scale.
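The iterative updating idea can be sketched with a deliberately simplified toy: a daily LAI model nudged toward sparse RS observations whenever one is available. The logistic growth model and the fixed gain below are hypothetical stand-ins; real systems couple a full crop model (e.g. APSIM) with more principled assimilation schemes:

```python
# Toy sketch of RS-to-model updating: a logistic daily LAI model is nudged
# toward 'satellite' observations. All parameters are illustrative.

def simulate_lai(days, observations, gain=0.5, rate=0.15, lai_max=6.0):
    """Run a toy daily LAI model for `days` days, blending in any
    observation found in `observations` ({day_index: observed_lai})."""
    lai = 0.1
    trajectory = []
    for day in range(days):
        lai += rate * lai * (1.0 - lai / lai_max)  # model growth step
        if day in observations:                    # RS update step
            lai += gain * (observations[day] - lai)
        trajectory.append(lai)
    return trajectory

# Sparse 'satellite' observations (5-14 days apart), lower than the model.
obs = {20: 1.0, 30: 1.8, 44: 3.0}
traj = simulate_lai(60, obs)
```

Comparing `traj` against a free-running simulation (`simulate_lai(60, {})`) shows the trajectory being pulled toward the observed values at update days, which is the mechanism by which RS corrects downstream yield predictions.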

Over the past decades, various dynamic crop models have been developed to simulate crop phenology. They primarily incorporate effects of the soil-plant-atmosphere relationship and mimic physiological processes occurring at plant and canopy levels (Doraiswamy and Thompson 1982; Jones et al. 2003; Keating et al. 2003; Holzworth et al. 2014). Crop models are able to capture subtle differences between similar crops at field and regional scales. For example, in wheat, the CERES model was calibrated to accurately predict the anthesis date of three Italian durum wheat varieties (Creso: medium late, short; Duilio: early medium, tall; Simeto: early, short; Dettori et al. 2011). More recently, a gene-based version of the phenology model in APSIM predicted the heading date of 210 lines across the Australian grain belt with a RMSE of 4.3 days (Zheng et al. 2013). In barley, APSIM predicted the phenology of 11 commercial cultivars in southern and western Australia with deviations of 1.4 to 7 days across genotypes (Liu et al. 2020). In sorghum, flowering date and physiological maturity for an early- and a medium-maturing cultivar were simulated with appreciably high accuracy using APSIM in smallholder farming systems in West Africa (Akinseye et al. 2020). In lupin and canola, flowering dates in Western Australia were predicted with low root mean square errors (4–5 days and 4.7 days, respectively) (Farre et al. 2002, 2004). Overall, it is clear from these well-documented (though not exhaustive) examples that crop models like APSIM are robust enough to estimate crop phenology for various cultivars at point scale across a wide range of environments.
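The accuracies quoted above are typically reported as an RMSE in days between simulated and observed stage dates. A minimal sketch of that comparison (the dates below are invented for illustration):

```python
# RMSE, in days, between predicted and observed phenology dates.
from datetime import date
from math import sqrt

def rmse_days(predicted, observed):
    """Root mean square error (days) between paired lists of dates."""
    errors = [(p - o).days for p, o in zip(predicted, observed)]
    return sqrt(sum(e * e for e in errors) / len(errors))

# Invented example: predicted vs observed flowering dates for three fields.
pred = [date(2021, 9, 12), date(2021, 9, 20), date(2021, 10, 2)]
obs = [date(2021, 9, 15), date(2021, 9, 18), date(2021, 10, 6)]
error = rmse_days(pred, obs)  # errors of -3, +2 and -4 days
```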

7. CHALLENGES IN THE APPLICATION OF RS IN AGRICULTURE

Overall, RS can be broadly categorized by sensor type and sensing platform. Table 5 describes the pros, cons and applications of the sensor types and sensing platforms commonly used in agriculture.

Table 5.

Pros, cons and applications of various sensor types and sensing platforms for collating crop type and crop phenology.

Sensor and platform: Multispectral (satellite); Hyperspectral (satellite); Multi-/hyperspectral and RGB (airborne); Multi-/hyperspectral and RGB (UAV); Thermal (UAV/airborne/satellite); LIDAR (UAV/airborne/satellite); Ground-based RGB, spectral and thermal
Descriptions: designed with a limited number of bands targeted at specific traits (multispectral satellite); highest spectral resolution, with >1000 bands (hyperspectral satellite); most commonly used for true colour imagery and can be used to create orthomosaic maps quickly (RGB); identifying water stress and detecting heat of canopies and surfaces (thermal); mainly used to generate a detailed 3D model of the canopy (LIDAR); fixed sensor or handheld (ground-based)
Pros• Common band settings from different satellite platforms
• Extensive coverage
• Multispectral resolution
• Affordable price
• Regular revisit
• Good historical data
• Large coverage
• Contains continuous wavelengths
• High spatial and spectral resolution allows targeting a wide range of crop attributes
• High temporal, spatial and spectral resolutions
• Quasi-real-time data collection
• Changeable sensors
• Less affected by atmospheric conditions
• Targeting a wide range of morphological and physiological crop attributes
• High temporal, spatial and spectral resolutions
• Real-time data collection
• Changeable sensors
• Affordable price with RGB and certain multispectral sensors
• Versatility to record temperature and hot spot variables
• Capable of penetrating canopy surfaces
• Provides 3D structure of plants
• Low costs
• Flexible connectivity via 4/5 G
• Internet of Things (IoT) connectivity
• Can be installed to collect data in remote areas
• High resolution close to object
Cons • Low spatial resolution
• Clouds may obscure ground features
• Cost increase with the increase of spatial resolution
• Images may not be collected at critical times
• Low spatial resolution
• Difficulties in accurate atmospheric corrections across the spectrum
• Complexity in data processing
• High costs
• Limited coverage
• Limited by favourable weather conditions
• Limited availability of sensors and operators
• Difficulty of organizing an airborne campaign at short notice
• High cost with multi-/hyperspectral sensors
• Limited coverage
• Limited by favourable weather conditions
• Unstable platform can lead to blurred images
• Costly
• Low resolution
• Costly
• Does not provide visual image so difficult to interpret
• Surveys can be time consuming
• Surveys can be labour intensive
• Data from single point or image of canopy
Applications• Large-scale crop-type classification
• Crop growth monitoring and production prediction
• Regional crop classification
• Crop stress detection
• Regional scale crop classification
• Crop stress detection
• Identifying within-field variations for PA management
• Crop mapping
• Crop stress detection
• Crop physiology studies
• Within-field variation for precision agriculture
• Monitoring heat
• Identifying plant growth stresses
• Biomass/yield estimation
• Plant height
• Canopy structure analysis
• Ground truth data collection
• Identifying reflectance
• Growth attributes of single leaf/plant/region

Although attributes of EO have dramatically improved, some challenges remain. These relate in particular to the discrimination between crop types and the prediction of crop phenology. Overall, the ability to improve both crop-type and phenology estimates is mainly limited by the variability in G × E × M within and between fields, which can result in mixed reflectance within a short distance on the ground. This adds complexities that may be challenging for traditional approaches to resolve.

Some challenges that exist and need further research and development for operational implementation of current systems are highlighted below:

  • - The large amount of EO data that currently exists is insufficient to address decisions at field and within-field scales. These data were mainly derived from RS platforms with less frequent revisits and lower spatial resolution (e.g. Landsat and MODIS), which has limited the industry’s ability to extract accurate information on crop type and phenological signatures at pixel and field scales.

  • - The lack of state-of-the-art software frameworks that can manipulate and compute large volumes of data within short turnaround periods for near-real-time implementation.

  • - The lack of easily accessible high-performance computing (HPC) clouds that have robust cyber designs and reliable agility in managing and analysing the progressive and rapid growth in digital technologies.

Currently, various platforms, in particular cloud-based platforms, exist commercially (e.g. Amazon Web Services, Google Earth Engine (GEE) and Earth Observation System (EOS)), at a national level (e.g. NCRIS) and at an institutional level (e.g. the high-performance computing (HPC) system at The University of Queensland). Platforms of this kind are needed to enable automated delivery, in near-real-time mode, of most of the proposed applications.

Finally, it is anticipated that crop growth models could augment the prediction of phenology from RS, given their capacity to simulate crop growth responses to G × E × M interactions. However, this can only be achieved by exploring new frontiers in application and/or developing novel metrics that can effectively harness all dimensions of the available data to achieve higher accuracies, in particular in crop phenology or crop-type estimation.

8. EXPLORING NEW FRONTIERS TO ADDRESS CROP SPECIES AND PHENOLOGY DECISIONS AT THE SUB-PADDOCK LEVEL

Up to now, no significant effort has successfully addressed the accurate prediction of crop phenological stages within fields, across farms and regions. To achieve this, the Grains Research and Development Corporation (GRDC) has recently earmarked a large investment in the development and application of enabling technologies in Australian agriculture. To implement the latest advances in EO technologies and data analytic algorithms for next-generation crop monitoring and crop discrimination applications, UQ, in collaboration with various national research and industry partners, proposed an integrated approach that utilizes the full range of digital technologies, i.e. soil, weather, management, crop modelling and various sensing (fixed sensors, UAV, aircraft) and EO platforms (Fig. 7). This project will enable the analysis, development, validation and application of prediction pipelines to determine crop phenology and crop type at a large-plot level (60 m × 60 m) while accounting for G × E × M interactions. Briefly, work at detailed validation sites will first enable the development of digital technologies by fusing the various data layers (temporally, spatially and spectrally) using ML approaches. Secondly, this will enable the validation and out-scaling of these technologies using high-resolution RS across large regional scales. Finally, the project will deliver an industry-ready diagnostic tool for future on-farm application across Australia and beyond.
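The fusion step can be illustrated with a hedged sketch: per-pixel feature vectors combining data layers (here, NDVI at several acquisition dates) are fed to a random forest classifier (Breiman 2001) to discriminate crop types. The two synthetic 'crops' below differ only in the timing of their NDVI peak, and all numbers are invented for illustration; this is not the CropPhen pipeline itself:

```python
# Sketch of ML-based crop discrimination from fused temporal features.
# Synthetic data only; peak timings, noise level and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
dates = np.linspace(0, 1, 8)  # 8 acquisition dates across a season

def ndvi_profiles(peak, n):
    """n noisy Gaussian-shaped NDVI seasons peaking at `peak`."""
    base = 0.2 + 0.6 * np.exp(-((dates - peak) ** 2) / 0.02)
    return base + rng.normal(0.0, 0.03, size=(n, dates.size))

X = np.vstack([ndvi_profiles(0.45, 300),   # 'wheat-like' earlier peak
               ndvi_profiles(0.60, 300)])  # 'barley-like' later peak
y = np.array([0] * 300 + [1] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In practice the feature vectors would also carry weather, soil and management layers, and the classes would come from ground-truth surveys at the validation sites.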

Figure 7. Proposed set-up of a multiple digital sensor platform to record information at a detailed crop validation site in the recently funded GRDC project (CropPhen). This targets the development of new approaches for the accurate monitoring of crop phenology and discrimination of crop types (copyright QAAFI).

The spatial and temporal quantification of crop species and developmental stage using remotely sensed imagery could provide an enabling data layer to better understand G × E × M interactions on a larger scale to deliver improved products and services to growers that enable more profitable decision-making. For instance, existing knowledge on crop susceptibility to pests, diseases and abiotic constraints at different development stages could be leveraged against highly frequent and spatially referenced measures of crop developmental stages at the field and subfield levels. Crop phenological characteristics such as the timing of emergence and crop growth stage (e.g. flowering) together with crop modelling can support variety and sowing time decisions, crop management strategies and the management of production risk well in advance of harvest.

9. IMPLICATIONS TO INDUSTRY

It is evident that the digital technologies discussed here will help shape and advance our ability to make informed decisions faster, with greater accuracy and over greater spatial extents than was previously possible. This will require targeted research and development, in particular in data software pipelines, cloud computing and quantitative metrics, to aid producers in better managing on-farm risks arising from G × E × M interactions. In agriculture, it is anticipated that the utility and application of predictive tools derived from the fusion of different data sources using state-of-the-art ML algorithms will progressively increase. For example, mapping the spatial distribution of estimated sowing dates and/or flowering dates at farm level will aid producers, in particular by improving the application of pest-, disease- and nutrition-based decision-support tools, as well as yield forecasting models. This will result in

  • (i) Optimizing input costs,

  • (ii) Enhancing farm management practices,

  • (iii) Improving whole-farm profitability,

  • (iv) Enhancing crop insurance products, thus leading to a reduction in crop risk and market volatility.

The application of digital technologies derived from EO, in silico modelling and AI has the potential to be transformational and will likely lead to new, robust, cutting-edge approaches that further genetic gains, increase profitability and therefore enhance resilience and improve food production.

ACKNOWLEDGMENTS

A special thanks to Dr Xuemin Wang for creating Figure 1 from his research data.

SOURCES OF FUNDING

This work was funded by the Grains Research and Development Corporation (GRDC), Australia, through the ‘CropPhen’ project (UOQ2002-010RTX), in partnership with The University of Queensland (UQ), the University of Melbourne (UoM), Data Farming Pty Ltd, Primary Industries and Regions South Australia (SARDI) and the Department of Primary Industries and Regional Development (DPIRD). The CropPhen project is currently underway with the main aim of increasing the accuracy (spatially and temporally) of predictions of crop type and crop phenology. This approach is framed around the set-up outlined in Fig. 7.

CONTRIBUTIONS BY THE AUTHORS

A.P. was the project lead and lead investigator of the research project (CropPhen: UOQ2002-010RTX). A.P. and Y.Z. conceived and designed the literature research outline, conducted the literature review and wrote the first versions of the manuscript. P.Z., K.C., Y.Z., K.P., B.B., Y.D., T.N., F.R. and S.C. contributed to the sections related to their respective research backgrounds. All authors reviewed and edited the manuscript.

DISCLAIMER

To the authors’ best knowledge, this publication has been prepared in good faith based on information available at the date of publication.

LITERATURE CITED

Ababaei
B
,
Chenu
K
.
2020
.
Heat shocks increasingly impede grain filling but have little effect on grain setting across the Australian wheatbelt
.
Agricultural and Forest Meteorology
107889
:
284
.

Ahmed
OS
,
Shemrock
A
,
Chabot
D
,
Dillon
C
,
Williams
G
,
Wasson
R
,
Franklin
SE
.
2017
.
Hierarchical land cover and vegetation classification using multispectral data acquired from an unmanned aerial vehicle
.
International Journal of Remote Sensing
38
:
2037
2052
.

Akinseye
FM
,
Ajeigbe
HA
,
Traore
PCS
,
Agele
SO
,
Zemadim
B
,
Whitbread
A
.
2020
.
Improving sorghum productivity under changing climatic conditions: a modelling approach
.
Field Crops Research
246
:
107685
.

Alexandratos
N
,
Bruinsma
J
.
2012
.
World agriculture towards 2030/2050: the 2012 revision
. ESA Working Paper No. 12-03.
Rome
:
FAO
.

Allan
RJ
.
2000
.
El Niño and the Southern Oscillation: multiscale variability and its impacts on natural ecosystems and society.
In:
Diaz
HF
,
Markgraf
V
, eds.
ENSO and climatic variability in the last 150 years.
Cambridge, UK
:
Cambridge University Press
,
3
55
.

Ban
YF
.
2003
.
Synergy of multitemporal ERS-1 SAR and Landsat TM data for classification of agricultural crops
.
Canadian Journal of Remote Sensing
29
:
518
526
.

Bargiel
D
.
2017
.
A new method for crop classification combining time series of radar images and crop phenology information
.
Remote Sensing of Environment
198
:
369
383
.

Belgiu
M
,
Csillik
O
.
2018
.
Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis
.
Remote Sensing of Environment
204
:
509
523
.

Benediktsson
JA
,
Swain
PH
,
Ersoy
OK
.
1990
.
Neural network approaches versus statistical methods in classification of multisource remote sensing data
.
IEEE Transactions on Geoscience and Remote Sensing
28
:
540
552
.

Bohler
JE
,
Schaepman
ME
,
Kneubuhler
M
.
2018
.
Crop classification in a heterogeneous arable landscape using uncalibrated UAV data
.
Remote Sensing
10
:
1282
.

Breiman
L
.
2001
.
Random forests
.
Machine Learning
45
:
5
32
.

Bustos-Korts
D
,
Boer
MP
,
Malosetti
M
,
Chapman
S
,
Chenu
K
,
Zheng
B
,
van Eeuwijk
FA
.
2019
.
Combining crop growth modeling and statistical genetic modeling to evaluate phenotyping strategies. Frontiers in Plant Science
10
:
1491
.

Cai
Y
,
Guan
K
,
Lobell
D
,
Potgieter
AB
,
Wang
S
,
Peng
J
,
Xu
T
,
Asseng
S
,
Zhang
Y
,
You
L
,
Peng
B
.
2019
.
Integrating satellite and climate data to predict wheat yield in Australia using machine learning approaches
.
Agricultural and Forest Meteorology
274
:
144
159
.

Campbell
JB
.
2002
. Introduction to remote sensing
, 3rd edn.
New York
:
Guilford Press
.

Castillejo-González
IL
,
López-Granados
F
,
García-Ferrer
A
,
Peña-Barragán
JM
,
Jurado-Expósito
M
,
de la Orden
MS
,
González-Audicana
M
.
2009
.
Object- and pixel-based analysis for mapping crops and their agro-environmental associated measures using QuickBird imagery
.
Computers and Electronics in Agriculture
68
:
207
215
.

Chapman
S
.
2008
.
Use of crop models to understand genotype by environment interactions for drought in real-world and simulated plant breeding trials
.
International Journal of Plant Breeding
161
:
195
208
.

Chapman
CS
,
Merz
T
,
Chan
A
,
Jackway
P
,
Hrabar
S
,
Dreccer
FM
,
Holland
E
,
Zheng
B
,
Ling
JT
,
Jimenez-Berni
J
.
2014
.
Pheno-copter: a low-altitude, autonomous remote-sensing robotic helicopter for high-throughput field-based phenotyping
.
Agronomy
4
:
279
301
.

Chapman
SC
,
Zheng
B
,
Potgieter
A
,
Guo
W
,
Baret
F
,
Liu
S
,
Madec
S
,
Solan
B
,
George-Jaeggli
B
,
Hammer
GL
,
Jordan
DR
.
2018
.
Visible, near infrared, and thermal spectral radiance on-board UAVs for high-throughput phenotyping of plant breeding trials
. In:
Thenkabail
PS
,
Lyon
GJ
,
Huete
A
, eds.
Hyperspectral remote sensing of vegetation (second edition, 4 volume set)
.
Boca Raton, London, New York
:
CRC Press-Taylor and Francis Group
,
273
297
.

Chauhan
YS
,
Ryan
M
,
Chandra
S
,
Sadras
VO
.
2019
.
Accounting for soil moisture improves prediction of flowering time in chickpea and wheat
.
Scientific Reports
9
:
7510
.

Chen
J
,
Jönsson
P
,
Tamura
M
,
Gu
Z
,
Matsushita
B
,
Eklundh
L
.
2004
.
A simple method for reconstructing a high-quality NDVI time-series data set based on the Savitzky–Golay filter
.
Remote Sensing of Environment
91
:
332
344
.

Chenu
K
,
Chapman
SC
,
Tardieu
F
,
McLean
G
,
Welcker
C
,
Hammer
GL
.
2009
.
Simulating the yield impacts of organ-level quantitative trait loci associated with drought response in maize: a “gene-to-phenotype” modeling approach
.
Genetics
183
:
1507
1523
.

Chenu
K
,
Cooper
M
,
Hammer
GL
,
Mathews
KL
,
Dreccer
MF
,
Chapman
SC
.
2011
.
Environment characterization as an aid to wheat improvement: interpreting genotype-environment interactions by modelling water-deficit patterns in North-Eastern Australia
.
Journal of Experimental Botany
62
:
1743
1755
.

Chenu
K
,
Deihimfard
R
,
Chapman
SC
.
2013
.
Large-scale characterization of drought pattern: a continent-wide modelling approach applied to the Australian wheatbelt–spatial and temporal trends
.
The New Phytologist
198
:
801
820
.

Chenu
K
,
Porter
JR
,
Martre
P
,
Basso
B
,
Chapman
SC
,
Ewert
F
,
Bindi
M
,
Asseng
S
.
2017
.
Contribution of crop models to adaptation in wheat
.
Trends in Plant Science
22
:
472
490
.

Collins
B
,
Chenu
K
.
2021
.
Improving productivity of Australian wheat by adapting sowing date and genotype phenology to future climate
.
Climate Risk Management
32
:
100300
.

Collins D, Della-Marta P, Plummer N, Trewin B. 2000. Trends in annual frequencies of extreme temperature events in Australia. Australian Meteorological Magazine 49: 277–292.

Crimp SJ, Gobbett D, Kokic P, Nidumolu U, Howden M, Nicholls N. 2016. Recent seasonal and long-term changes in southern Australian frost occurrence. Climatic Change 139: 115–128.

Dash J, Ogutu BO. 2016. Recent advances in space-borne optical remote sensing systems for monitoring global terrestrial ecosystems. Progress in Physical Geography: Earth and Environment 40: 322–351.

Davidson AM, Fisette T, Mcnairn H, Daneshfar B. 2017. Detailed crop mapping using remote sensing data (crop data layers). In: Delincé J, ed. Handbook on remote sensing for agricultural statistics (chapter 4). Rome: Handbook of the Global Strategy to improve Agricultural and Rural Statistics (GSARS).

Defourny P, Bontemps S, Bellemans N, Cara C, Dedieu G, Guzzonato E, Hagolle O, Inglada J, Nicola L, Rabaute T, Savinaud M, Udroiu C, Valero S, Bégué A, Dejoux J, El Harti A, Ezzahar J, Kussul N, Labbassi K, Lebourgeois V, Miao Z, Newby T, Nyamugama A, Salh N, Shelestov A, Simonneaux V, Traore P, Traore S, Koetz B. 2019. Near real-time agriculture monitoring at national scale at parcel resolution: performance assessment of the Sen2-Agri automated system in various cropping systems around the world. Remote Sensing of Environment 221: 551–568.

Delincé J. 2017a. Technical report on the Master Sampling Frame: the field experiments conducted in Nepal. Technical Report no. 30. Global Strategy to improve Agricultural and Rural Statistics (GSARS) Technical Report. Rome: FAO.

Delincé J. 2017b. Recent practices and advances for AMIS crop yield forecasting at farm/parcel level: a review. Rome: FAO–AMIS Publication.

Dettori M, Cesaraccio C, Motroni A, Spano D, Duce P. 2011. Using CERES-Wheat to simulate durum wheat production and phenology in Southern Sardinia, Italy. Field Crops Research 120: 179–188.

Doraiswamy PC, Thompson DR. 1982. A crop moisture stress index for large areas and its application in the prediction of spring wheat phenology. Agricultural Meteorology 27: 1–15.

Dreccer MF, Fainges J, Whish J, Ogbonnaya FC, Sadras VO. 2018a. Comparison of sensitive stages of wheat, barley, canola, chickpea and field pea to temperature and water stress across Australia. Agricultural and Forest Meteorology 248: 275–294.

Dreccer MF, Fainges J, Whish J, Ogbonnaya FC, Sadras VO. 2018b. Comparison of sensitive stages of wheat, barley, canola, chickpea and field pea to temperature and water stress across Australia. Agricultural and Forest Meteorology 248: 275–294.

Erbek FS, Özkan C, Taberner M. 2004. Comparison of maximum likelihood classification method with supervised artificial neural network algorithms for land use activities. International Journal of Remote Sensing 25: 1733–1748.

Farre I, Robertson MJ, Asseng S, French RJ, Dracup M. 2004. Simulating lupin development, growth, and yield in a Mediterranean environment. Australian Journal of Agricultural Research 55: 863–877.

Farre I, Robertson MJ, Walton GH, Asseng S. 2002. Simulating phenology and yield response of canola to sowing date in Western Australia using the APSIM model. Australian Journal of Agricultural Research 53: 1155–1164.

Feekes W. 1941. De tarwe en haar milieu [Wheat and its environment]. Vers XVII Tech. Tarwe Comm. Groningen: Hoitsema.

Feng G, Masek J, Schwaller M, Hall F. 2006. On the blending of the Landsat and MODIS surface reflectance: predicting daily Landsat surface reflectance. IEEE Transactions on Geoscience and Remote Sensing 44: 2207–2218.

Fischer RA. 2009. Farming systems of Australia: exploiting the synergy between genetic improvement and agronomy. In: Calderini D, Sadras VO, eds. Crop physiology: applications for genetic improvement and agronomy. San Diego, CA: Academic Press, 23–54.

Fischer RA, Byerlee D, Edmeades GO. 2014. Crop yields and global food security: will yield increase continue to feed the world? Canberra, Australia: Australian Centre for International Agricultural Research.

Flohr BM, Hunt JR, Kirkegaard JA, Evans JR. 2017. Water and temperature stress define the optimal flowering period for wheat in south-eastern Australia. Field Crops Research 209: 108–119.

Foley JA, Ramankutty N, Brauman KA, Cassidy ES, Gerber JS, Johnston M, Mueller ND, O’Connell C, Ray DK, West PC, Balzer C, Bennett EM, Carpenter SR, Hill J, Monfreda C, Polasky S, Rockström J, Sheehan J, Siebert S, Tilman D, Zaks DP. 2011. Solutions for a cultivated planet. Nature 478: 337–342.

Foody GM, Campbell NA, Trodd NM, Wood TF. 1992. Derivation and application of probabilistic measures of class membership from the maximum likelihood classification. Photogrammetric Engineering and Remote Sensing 58: 1335–1341.

Foody GM, Mathur A. 2004. A relative evaluation of multiclass image classification by support vector machines. IEEE Transactions on Geoscience and Remote Sensing 42: 1335–1343.

Gamon JA, Somers B, Malenovský Z, Middleton EM, Rascher U, Schaepman ME. 2019. Assessing vegetation function with imaging spectroscopy. Surveys in Geophysics 40: 489–513.

Ghazaryan G, Dubovyk O, Löw F, Lavreniuk M, Kolotii A, Schellberg J, Kussul N. 2018. A rule-based approach for crop identification using multi-temporal and multi-sensor phenological metrics. European Journal of Remote Sensing 51: 511–524.

GRDC. 2005. Cereal growth stages: the link to crop management. The Grains Research and Development Corporation, September 2005. ISBN 1-875477-40-3.

Guo W, Zheng B, Potgieter AB, Diot J, Watanabe K, Noshita K, Jordan DR, Wang X, Watson J, Ninomiya S, Chapman SC. 2018. Aerial imagery analysis – quantifying appearance and number of sorghum heads for applications in breeding and agronomy. Frontiers in Plant Science 9: 1544.

Hatt M, Heyhoe E, Whittle L. 2012. Options for insuring Australian agriculture, ABARES report to client prepared for Climate Division. Canberra, Australia: Department of Agriculture, Fisheries and Forestry, 34.

Haun J. 1973. Visual quantification of wheat development. Agronomy Journal 65: 116–119.

Heupel K, Spengler D, Itzerott S. 2018. A progressive crop-type classification using multitemporal remote sensing data and phenological information. Journal of Photogrammetry, Remote Sensing and Geoinformation Science 86: 53–69.

Hochman Z, van Rees H, Carberry PS, Hunt JR, McCown RL, Gartmann A, Holzworth D, van Rees S, Dalgliesh NP, Long W, Peake AS, Poulton PL, McClelland T. 2009. Re-inventing model-based decision support with Australian dryland farmers. 4. Yield Prophet® helps farmers monitor and manage crops in a variable climate. Crop and Pasture Science 60: 1057–1070.

Holzworth DP, Huth NI, deVoil PG, Zurcher EJ, Herrmann NI, McLean G, Chenu K, van Oosterom EJ, Snow V, Murphy C, Moore AD, Brown H, Whish JPM, Verrall S, Fainges J, Bell LW, Peake AS, Poulton PL, Hochman Z, Thorburn PJ, Gaydon DS, Dalgliesh NP, Rodriguez D, Cox H, Chapman S, Doherty A, Teixeira E, Sharp J, Cichota R, Vogeler I, Li FY, Wang E, Hammer GL, Robertson MJ, Dimes JP, Whitbread AM, Hunt J, van Rees H, McClelland T, Carberry PS, Hargreaves JNG, MacLeod N, McDonald C, Harsdorf J, Wedgwood S, Keating BA. 2014. APSIM – evolution towards a new generation of agricultural systems simulation. Environmental Modelling & Software 62: 327–350.

Hu W, Huang Y, Wei L, Zhang F, Li H. 2015. Deep convolutional neural networks for hyperspectral image classification. Journal of Sensors 2015: 258619.

Hunt J, Kirkegaard J, Celestina C, Porker K. 2019. Transformational agronomy: restoring the role of agronomy in modern agricultural research. In: Pratley J, Kirkegaard J, eds. Australian agriculture in 2020: from conservation to automation. Wagga Wagga, Australia: Australian Society for Agronomy, 373–388.

Hyles J, Bloomfield MT, Hunt JR, Trethowan RM, Trevaskis B. 2020. Phenology and related traits for wheat adaptation. Heredity 125: 417–430.

Igor I. 2018. Digital technologies in agriculture: adoption, value added and overview. https://medium.com/remote-sensing-in-agriculture/about (accessed 7/6/2021).

Immitzer M, Vuolo F, Atzberger C. 2016. First experience with Sentinel-2 data for crop and tree species classifications in central Europe. Remote Sensing 8: 166.

IPCC. 2019. Chapter 5: food security. In: Core Writing Team, ed. Climate change and land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems. Geneva, Switzerland: IPCC, 151.

Jain M, Srivastava AK, Singh B, Joon RK, McDonald A, Royal K, Lisaius MC, Lobell DB. 2016. Mapping smallholder wheat yields and sowing dates using micro-satellite data. Remote Sensing 8: 860.

Jenerowicz A, Woroszkiewicz M. 2016. The pan-sharpening of satellite and UAV imagery for agricultural applications. Edinburgh, United Kingdom: SPIE Remote Sensing.

Ji S, Zhang C, Xu A, Shi Y, Duan Y. 2018. 3D convolutional neural networks for crop classification with multi-temporal remote sensing images. Remote Sensing 10: 75.

Jones JW, Hoogenboom G, Porter CH, Boote KJ, Batchelor WD, Hunt LA, Wilkens PW, Singh U, Gijsman AJ, Ritchie JT. 2003. The DSSAT cropping system model. European Journal of Agronomy 18: 235–265.

Keating BA, Carberry PS, Hammer GL, Probert ME, Robertson MJ, Holzworth D, Huth NI, Hargreaves JNG, Meinke H, Hochman Z, McLean G, Verburg K, Snow V, Dimes JP, Silburn M, Wang E, Brown S, Bristow KL, Asseng S, Chapman S, McCown RL, Freebairn DM, Smith CJ. 2003. An overview of APSIM, a model designed for farming systems simulation. European Journal of Agronomy 18: 267–288.

Kussul N, Lavreniuk M, Skakun S, Shelestov A. 2017. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geoscience and Remote Sensing Letters 14: 778–782.

Kwak GH, Park NW. 2019. Impact of texture information on crop classification with machine learning and UAV images. Applied Sciences-Basel 9: 643.

Langley SK, Cheshire HM, Humes KS. 2001. A comparison of single date and multitemporal satellite image classifications in a semi-arid grassland. Journal of Arid Environments 49: 401–411.

Latif MA. 2019. Multi-crop recognition using UAV-based high-resolution NDVI time-series. Journal of Unmanned Vehicle Systems 7: 207–218.

Li QT, Wang CZ, Zhang B, Lu LL. 2015. Object-based crop classification with Landsat-MODIS enhanced time-series data. Remote Sensing 7: 16091–16107.

Lilley JM, Flohr BM, Whish JPM, Farre I, Kirkegaard JA. 2019. Defining optimal sowing and flowering periods for canola in Australia. Field Crops Research 235: 118–128.

Lira Melo de Oliveira Santos C, Augusto Camargo Lamparelli R, Kelly Dantas Araújo Figueiredo G, Dupuy S, Boury J, Luciano ACDS, Torres RDS, le Maire G. 2019. Classification of crops, pastures, and tree plantations along the season with multi-sensor image time series in a subtropical agricultural region. Remote Sensing 11: 334.

Liu K, Harrison MT, Hunt J, Angessa TT, Meinke H, Li C, Tian X, Zhou M. 2020. Identifying optimal sowing and flowering periods for barley in Australia: a modelling approach. Agricultural and Forest Meteorology 282–283: 107871.

Liu MW, Ozdogan M, Zhu X. 2014. Crop type classification by simultaneous use of satellite images of different resolutions. IEEE Transactions on Geoscience and Remote Sensing 52: 3637–3649.

Liu X, Sun Q, Meng Y, Fu M, Bourennane S. 2018. Hyperspectral image classification based on parameter-optimized 3D-CNNs combined with transfer learning and virtual samples. Remote Sensing 10: 1425.

Lobell DB, Thau D, Seifert C, Engle E, Little B. 2015. A scalable satellite-based crop yield mapper. Remote Sensing of Environment 164: 324–333.

Lobo A, Chic O, Casterad A. 1996. Classification of Mediterranean crops with multisensor data: per-pixel versus per-object statistics and image segmentation. International Journal of Remote Sensing 17: 2385–2400.

Lu D, Weng Q. 2007. A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing 28: 823–870.

Macdonald RB. 1984. A summary of the history of the development of automated remote sensing for agricultural applications. IEEE Transactions on Geoscience and Remote Sensing GE-22: 473–482.

Matvienko I, Gasanov M, Petrovskaia A, Jana RB, Pukalchik M, Oseledets I. 2020. Bayesian aggregation improves traditional single image crop classification approaches. arXiv:2004.03468v1.

McCabe MF, Aragon B, Houborg R, Mascaro J. 2017. CubeSats in hydrology: ultrahigh-resolution insights into vegetation dynamics and terrestrial evaporation. Water Resources Research 53: 10017–10024.

McGrath JM, Lobell DB. 2013. Regional disparities in the CO2 fertilization effect and implications for crop yields. Environmental Research Letters 8: 014054.

McNairn H, Champagne C, Shang J, Holmstrom D, Reichert G. 2009. Integration of optical and Synthetic Aperture Radar (SAR) imagery for delivering operational annual crop inventories. ISPRS Journal of Photogrammetry and Remote Sensing 64: 434–449.

McNairn H, Ellis J, Van Der Sanden JJ, Hirose T, Brown RJ. 2002. Providing crop information using RADARSAT-1 and satellite optical imagery. International Journal of Remote Sensing 23: 851–870.

Medhavy TT, Sharma T, Dubey RP, Hooda RS, Mothikumar KE, Yadav M, Manchanda ML, Ruhal DS, Khera AP, Jarwal SD. 1993. Crop classification accuracy as influenced by training strategy, data transformation and spatial resolution of data. Journal of the Indian Society of Remote Sensing 21: 21–28.

Messina CD, Technow F, Tang T, Totir RL, Gho C, Cooper M. 2018. Leveraging biological insight and environmental variation to improve phenotypic prediction: integrating crop growth models (CGM) with whole genome prediction (WGP). European Journal of Agronomy 100: 151–162.

Misra PN, Wheeler SG. 1978. Crop classification with LANDSAT multispectral scanner data. Pattern Recognition 10: 1–13.

Mountrakis G, Im J, Ogole C. 2011. Support vector machines in remote sensing: a review. ISPRS Journal of Photogrammetry and Remote Sensing 66: 247–259.

Nasrallah A, Baghdadi N, Mhawej M, Faour G, Darwish T, Belhouchette H, Darwich S. 2018. A novel approach for mapping wheat areas using high resolution Sentinel-2 images. Sensors (Basel, Switzerland) 18: 2089.

Ndikumana E, Ho Tong Minh D, Baghdadi N, Courault D, Hossard L. 2018. Deep recurrent neural network for agricultural classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sensing 10: 1217.

Nelson R. 2018. The future of public sector forecasting in Australian agriculture. ABARES Research Report 18.14. Canberra, Australia: Department of Agriculture and Water Resources.

Niel TGV, McVicar TR. 2004. Current and potential uses of optical remote sensing in rice-based irrigation systems: a review. Australian Journal of Agricultural Research 55: 155–185.

Orynbaikyzy A, Gessner U, Conrad C. 2019. Crop type classification using a combination of optical and radar remote sensing data: a review. International Journal of Remote Sensing 40: 6553–6595.

Pahlevan N, Sarkar S, Franz BA, Balasubramanian SV, He J. 2017. Sentinel-2 MultiSpectral Instrument (MSI) data processing for aquatic science applications: demonstrations and validations. Remote Sensing of Environment 201: 47–56.

Palchowdhuri Y, Valcarce-Diñeiro R, King P, Sanabria-Soto M. 2018. Classification of multi-temporal spectral indices for crop type mapping: a case study in Coalville, UK. The Journal of Agricultural Science 156: 24–36.

Peña-Barragán JM, Ngugi MK, Plant RE, Six J. 2011. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sensing of Environment 115: 1301–1316.

Pittman K, Hansen MC, Becker-Reshef I, Potapov PV, Justice CO. 2010. Estimating global cropland extent with multi-year MODIS data. Remote Sensing 2: 1844–1863.

Potgieter AB, Apan A, Dunn P, Hammer G. 2007. Estimating crop area using seasonal time series of enhanced vegetation index from MODIS satellite imagery. Australian Journal of Agricultural Research 58: 316–325.

Potgieter AB, Apan A, Hammer G, Dunn P. 2010. Early-season crop area estimates for winter crops in NE Australia using MODIS satellite imagery. ISPRS Journal of Photogrammetry and Remote Sensing 65: 380–387.

Potgieter AB, Apan A, Hammer G, Dunn P. 2011. Estimating winter crop area across seasons and regions using time-sequential MODIS imagery. International Journal of Remote Sensing 32: 4281–4310.

Potgieter AB, George-Jaeggli B, Chapman SC, Laws K, Suárez Cadavid LA, Wixted J, Watson J, Eldridge M, Jordan DR, Hammer GL. 2017. Multi-spectral imaging from an unmanned aerial vehicle enables the assessment of seasonal leaf area dynamics of sorghum breeding lines. Frontiers in Plant Science 8: 1532.

Potgieter AB, Hammer GL, Doherty A, de Voil P. 2006. Oz-wheat: a regional-scale crop yield simulation model for Australian wheat. In: Information series. Brisbane, Australia: Queensland Department of Primary Industries & Fisheries, 20.

Potgieter AB, Lawson K, Huete AR. 2013. Determining crop acreage estimates for specific winter crops using shape attributes from sequential MODIS imagery. International Journal of Applied Earth Observation and Geoinformation 23: 254–263.

Potgieter AB, Lobell DB, Hammer GL, Jordan DR, Davis P, Brider J. 2016. Yield trends under varying environmental conditions for sorghum and wheat across Australia. Agricultural and Forest Meteorology 228–229: 276–285.

Puissant A, Hirsch J, Weber C. 2005. The utility of texture analysis to improve per-pixel classification for high to very high spatial resolution imagery. International Journal of Remote Sensing 26: 733–745.

Ray DK, Ramankutty N, Mueller ND, West PC, Foley JA. 2012. Recent patterns of crop yield growth and stagnation. Nature Communications 3: 1293.

Roerink GJ, Menenti M, Verhoef W. 2000. Reconstructing cloudfree NDVI composites using Fourier analysis of time series. International Journal of Remote Sensing 21: 1911–1917.

Rustowicz RM. 2017. Crop classification with multi-temporal satellite imagery. Stanford, CA: Stanford University.

Sadeh Y, Zhu X, Chenu K, Dunkerley D. 2019. Sowing date detection at the field scale using CubeSats remote sensing. Computers and Electronics in Agriculture 157: 568–580.

Sadeh Y, Zhu X, Dunkerley D, Walker JP, Zhang Y, Rozenstein O, Manivasagam VS, Chenu K. 2020. Sentinel-2 and Planetscope data fusion into daily 3 m images for leaf area index monitoring. In: IEEE International Geoscience and Remote Sensing Symposium, 19–24 July 2020, Waikoloa, HI.

Sadeh Y, Zhu X, Dunkerley D, Walker JP, Zhang Y, Rozenstein O, Manivasagam VS, Chenu K. 2021. Fusion of Sentinel-2 and PlanetScope time-series data into daily 3 m surface reflectance and wheat LAI monitoring. International Journal of Applied Earth Observation and Geoinformation 96: 102260.

Saini R, Ghosh SK. 2018. Crop classification on single date Sentinel-2 imagery using random forest and support vector machine. In: ISPRS TC V Mid-term Symposium, Dehradun, India.

Saini R, Ghosh SK. 2019. Crop classification in a heterogeneous agricultural environment using ensemble classifiers and single-date Sentinel-2A imagery. Geocarto International, doi:10.1080/10106049.2019.1700556.

Sakamoto T, Yokozawa M, Toritani H, Shibayama M, Ishitsuka N, Ohno H. 2005. A crop phenology detection method using time-series MODIS data. Remote Sensing of Environment 96: 366–374.

Simonneaux V, Duchemin B, Helson D, Er-Raki S, Olioso A, Chehbouni AG. 2008. The use of high-resolution image time series for crop classification and evapotranspiration estimate over an irrigated area in central Morocco. International Journal of Remote Sensing 29: 95–116.

Singha M, Wu B, Zhang M. 2016. An object-based paddy rice classification using multi-spectral data and crop phenology in Assam, Northeast India. Remote Sensing 8: 479.

Son N-T, Chen C-F, Chen C-R, Duc H-N, Chang L-Y. 2014. A phenology-based classification of time-series MODIS data for rice crop monitoring in Mekong Delta, Vietnam. Remote Sensing 6: 135–156.

Tatsumi K, Yamashiki Y, Canales Torres MA, Taipe CLR. 2015. Crop classification of upland fields using random forest of time-series Landsat 7 ETM+ data. Computers and Electronics in Agriculture 115: 171–179.

Teluguntla P, Thenkabail PS, Oliphant A, Xiong J, Gumma MK, Congalton RG, Yadav K, Huete A. 2018a. A 30-m Landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS Journal of Photogrammetry and Remote Sensing 144: 325–340.

Teluguntla P, Thenkabail PS, Oliphant A, Xiong JN, Gumma MK, Congalton RG, Yadav K, Huete A. 2018b. A 30-m Landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS Journal of Photogrammetry and Remote Sensing 144: 325–340.

Upadhyay G, Ray SS, Panigrahy S. 2008. Derivation of crop phenological parameters using multi-date SPOT-VGT-NDVI data: a case study for Punjab. Photonirvachak-Journal of the Indian Society of Remote Sensing 36: 37–50.

Van Niel TG, McVicar TR. 2004. Determining temporal windows for crop discrimination with remote sensing: a case study in south-eastern Australia. Computers and Electronics in Agriculture 45: 91–108.

Verhoef W, Menenti M, Azzali S. 1996. A colour composite of NOAA-AVHRR-NDVI based on time series analysis (1981–1992). International Journal of Remote Sensing 17: 231–235.

Waldhoff G, Lussem U, Bareth G. 2017. Multi-data approach for remote sensing-based regional crop rotation mapping: a case study for the Rur catchment, Germany. International Journal of Applied Earth Observation and Geoinformation 61: 55–69.

Waldner F, Horan H, Chen Y, Hochman Z. 2019. High temporal resolution of leaf area data improves empirical estimation of grain yield. Scientific Reports 9: 15714.

Wan S, Chang S-H. 2019. Crop classification with WorldView-2 imagery using support vector machine comparing texture analysis approaches and grey relational analysis in Jianan Plain, Taiwan. International Journal of Remote Sensing 40: 8076–8092.

Wang X, Hunt C, Cruickshank A, Mace E, Hammer G, Jordan D. 2020. The impacts of flowering time and tillering on grain yield of sorghum hybrids across diverse environments. Agronomy 10: 135.

Wang C, Zhong C, Yang Z. 2014. Assessing bioenergy-driven agricultural land use change and biomass quantities in the U.S. Midwest with MODIS time series. Journal of Applied Remote Sensing 8: 085198. doi:10.1117/1.JRS.8.085198.

Wardlow BD, Egbert SL. 2008. Large-area crop mapping using time-series MODIS 250 m NDVI data: an assessment for the U.S. Central Great Plains. Remote Sensing of Environment 112: 1096–1116.

Watson J, Zheng B, Chapman S, Chenu K. 2017. Projected impact of future climate on water-stress patterns across the Australian wheatbelt. Journal of Experimental Botany 68: 5907–5921.

Wei LF, Yu M, Zhong YF, Zhao J, Liang YJ, Hu X. 2019. Spatial-spectral fusion based on conditional random fields for the fine classification of crops in UAV-borne hyperspectral remote sensing imagery. Remote Sensing 11: 780.

Whish JPM, Lilley JM, Morrison MJ, Cocks B, Bullock M. 2020. Vernalisation in Australian spring canola explains variable flowering responses. Field Crops Research 258: 107968.

Wilson JH, Zhang CH, Kovacs JM. 2014. Separating crop species in Northeastern Ontario using hyperspectral data. Remote Sensing 6: 925–945.

Wulder MA, White JC, Loveland TR, Woodcock CE, Belward AS, Cohen WB, Fosnight EA, Shaw J, Masek JG, Roy DP. 2016. The global Landsat archive: status, consolidation, and direction. Remote Sensing of Environment 185: 271–283.

Xavier AC, Rudorff BFT, Shimabukuro YE, Berka LMS, Moreira MA. 2006. Multi-temporal analysis of MODIS data to classify sugarcane crop. International Journal of Remote Sensing 27: 755–768.

Xu WC, Lan YB, Li YH, Luo YF, He ZY. 2019. Classification method of cultivated land based on UAV visible light remote sensing. International Journal of Agricultural and Biological Engineering 12: 103–109.

Xue QW, Weiss A, Baenziger PS. 2004. Predicting phenological development in winter wheat. Climate Research 25: 243–252.

Yamashita R, Nishio M, Do RKG, Togashi K. 2018. Convolutional neural networks: an overview and application in radiology. Insights into Imaging 9: 611–629.

Yang C, Everitt JH, Murden D. 2011. Evaluating high resolution SPOT 5 satellite imagery for crop identification. Computers and Electronics in Agriculture 75: 347–354.

Zadoks J, Chang T, Konzak C. 1974. A decimal code for the growth stages of cereals. Weed Research 14: 415–421.

Zeng L, Wardlow BD, Xiang D, Hu S, Li D. 2020. A review of vegetation phenological metrics extraction using time-series multispectral satellite data. Remote Sensing of Environment 237: 111511.

Zhang X, Friedl MA, Schaaf CB, Strahler AH, Hodges JCF, Gao F, Reed BC, Huete A. 2003. Monitoring vegetation phenology using MODIS. Remote Sensing of Environment 84: 471–475.

Zhang X-W, Liu J-F, Qin Z, Qin F. 2019. Winter wheat identification by integrating spectral and temporal information derived from multi-resolution remote sensing data. Journal of Integrative Agriculture 18: 2628–2643.

Zhao LC, Shi Y, Liu B, Hovis C, Duan YL, Shi ZC. 2019. Finer classification of crops by fusing UAV images and Sentinel-2A data. Remote Sensing 11: 3012.

Zhao J, Zhong YF, Hu X, Wei LF, Zhang LP. 2020. A robust spectral-spatial approach to identifying heterogeneous crops using remote sensing imagery with high spectral and spatial resolutions. Remote Sensing of Environment 239: 111605.

Zheng B, Biddulph B, Li D, Kuchel H, Chapman S. 2013. Quantification of the effects of VRN1 and Ppd-D1 to predict spring wheat (Triticum aestivum) heading time across diverse environments. Journal of Experimental Botany 64: 3747–3761.

Zheng B, Chapman S, Chenu K. 2018. The value of tactical adaptation to El Niño–Southern Oscillation for East Australian wheat. Climate 6: 77.

Zheng B, Chenu K, Chapman SC. 2016. Velocity of temperature and flowering time in wheat – assisting breeders to keep pace with climate change. Global Change Biology 22: 921–933.

Zheng B, Chenu K, Fernanda Dreccer M, Chapman SC. 2012a. Breeding for the future: what are the potential impacts of future frost and heat events on sowing and flowering time requirements for Australian bread wheat (Triticum aestivum) varieties? Global Change Biology 18: 2899–2914.

Zheng B, Chenu K, Fernanda Dreccer M, Chapman SC. 2012b. Breeding for the future: what are the potential impacts of future frost and heat events on sowing and flowering time requirements for Australian bread wheat (Triticum aestivum) varieties? Global Change Biology 18: 2899–2914.

Zhong L, Gong P, Biging GS. 2014. Efficient corn and soybean mapping with temporal extendability: a multi-year experiment using Landsat imagery. Remote Sensing of Environment 140: 1–13.

Zhong L, Hu L, Zhou H. 2019. Deep learning based multi-temporal crop classification. Remote Sensing of Environment 221: 430–443.

Zhu L, Radeloff VC, Ives AR. 2017. Improving the mapping of crop types in the Midwestern U.S. by fusing Landsat and MODIS satellite data. International Journal of Applied Earth Observation and Geoinformation 58: 1–11.

Zurita-Milla R, Izquierdo-Verdiguier E, de By RA. 2017. Identifying crops in smallholder farms using time series of WorldView-2 images. In: 2017 9th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), 1–3.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Handling Editor: Steve Long