Byte-size Information to Chew on

Synopsis: Intelligent Monitoring of Stress Induced by Water Deficiency in Plants Using Deep Learning

Rohan Wadhawan
7 min read · Feb 23, 2022

Title: Intelligent Monitoring of Stress Induced by Water Deficiency in Plants Using Deep Learning (2021)

Authors: Shiva Azimi, Rohan Wadhawan, Tapan K. Gandhi

Publication Link: https://ieeexplore.ieee.org/abstract/document/9541228

Pre-Print Link: https://arxiv.org/abs/2104.07911

Keywords: Water Stress Monitoring, Plant Phenotyping, Agritech, Computer Vision, Deep Learning (DL), Convolutional Neural Network, Long Short-Term Memory, Spatiotemporal Analysis, Explainable AI

Summary

Detailed analyses of topics like Convolutional Neural Networks (CNNs) [1], Long Short-Term Memory (LSTM) networks [2], and Grad-CAM visualization [3] are beyond the scope of this article, but I have provided links to relevant resources that will come in handy while reading the paper. The images and results shared here are taken from the original manuscript.

Problem Statement

To develop a technique for analyzing and categorizing the progressive visual changes that water stress induces in a plant, allowing for early detection of water-deficiency stress and timely recovery of affected plants.

Paper Contribution

  • Proposed a deep learning pipeline that employs a variant of the CNN-LSTM network to learn spatiotemporal patterns from a plant's shoot and use them for water stress classification.
  • Applied and validated this pipeline for the specific case of the Chickpea plant, as it is a crucial crop for ensuring food security in developing countries [4] and an excellent source of key nutrients that fulfil dietary needs [5]. Most importantly, it is greatly affected by water deficiency [6,7].
  • Curated a Chickpea plant shoot image dataset consisting of two varieties, namely JG-62 and Pusa-372, subjected to three different water stress conditions.
JG-62 plant pot on the left, Pusa-372 plant pot on the right
  • The proposed temporal technique achieves a ceiling-level water stress classification accuracy of 98.32% for JG and 97.5% for Pusa, outperforming the best reported CNN-based (spatial-only) technique by at least 14%.
  • Under noisy conditions, the average model accuracy dips by at most 2.5%, with a small standard deviation about the new accuracy.
  • For all practical purposes, the proposed technique is independent of the underlying CNN feature extractor and the chickpea species in consideration.
  • An ablation study examines the effect of varying the number of temporal sessions used for training on water stress prediction.

Overview of Methodology

  • The Chickpea plant shoot image dataset of the JG and Pusa varieties was collected in collaboration with the National Institute of Plant Genome Research (NIPGR). Overall, the dataset has 7680 images, with plant samples equally divided into three water stress categories: Young Seedling, Before Flowering, and Control, based on when stress was applied during growth.
  • Shoot images were preferred over tracking individual plant organs such as leaves, flowers, and stems, as they are more resource-efficient to capture and observe and do not require destructive or invasive monitoring techniques.
  • The deep learning pipeline consists of four stages: an input image sequence, data augmentation, the CNN-LSTM network, and the water stress class output.
Deep learning pipeline for water stress classification from plant shoot images
  • Data augmentation takes the form of geometric transformations (rotation, translation, and flipping) and additive Gaussian noise that simulates sensor data acquisition noise; a minimal augmentation sketch follows this list.
Gaussian noise added to images of a given JG-62 image sequence.
Gaussian noise added to images of a given Pusa-372 image sequence.
  • The CNN-LSTM network is composed of either a VGG16 [8] or an Inception-V3 [9] CNN feature extractor, applied in a time-distributed manner (shared weights) across the LSTM's sequential cells, whose number equals the number of sessions used for prediction. The following figure illustrates the network architecture, and a code sketch of it appears after this list:
CNN-LSTM architecture used for water stress classification — Before Flowering (BF), Control (C), Young Seedling (YS)
  • Models are trained using a stratified five-fold cross-validation strategy. Average accuracy, macro-sensitivity, macro-specificity, and macro-precision are used as evaluation metrics.
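
To make the augmentation step concrete, here is a minimal NumPy sketch of the geometric and noise transforms described above. The parameter values (shift range, noise sigma) are illustrative placeholders rather than the paper's settings, and np.roll stands in for a proper affine translation; rotation, also used in the paper, would need e.g. scipy.ndimage.rotate.

```python
import numpy as np

def augment(image, rng, max_shift=10, sigma=0.05):
    """Flip, translate, and add Gaussian noise to one shoot image in [0, 1]."""
    # Random horizontal flip
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    # Random translation (np.roll is a simple stand-in for an affine shift)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    image = np.roll(image, shift=(dy, dx), axis=(0, 1))
    # Additive Gaussian noise simulating sensor acquisition noise
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Apply independently to each image of a session sequence (dummy data here)
rng = np.random.default_rng(0)
sequence = np.random.rand(4, 224, 224, 3)
augmented = np.stack([augment(img, rng) for img in sequence])
```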

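The CNN-LSTM network itself can be sketched in a few lines of TensorFlow/Keras. The following is a minimal reconstruction, not the authors' code: the sequence length, LSTM width, input resolution, and optimizer are assumed placeholders, and only the overall pattern (a time-distributed VGG16 [8] feature extractor feeding an LSTM [2] and a three-way softmax) follows the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SESSIONS = 4            # placeholder sequence length; the paper's ablation varies this
NUM_CLASSES = 3             # Young Seedling, Before Flowering, Control
IMG_SHAPE = (224, 224, 3)   # assumed input resolution

def build_cnn_lstm(num_sessions=NUM_SESSIONS):
    # Per-frame feature extractor; Inception-V3 [9] could be swapped in here
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet",
        input_shape=IMG_SHAPE, pooling="avg")
    inputs = layers.Input(shape=(num_sessions, *IMG_SHAPE))
    # TimeDistributed shares the CNN weights across all session images
    features = layers.TimeDistributed(backbone)(inputs)   # (batch, T, 512)
    state = layers.LSTM(128)(features)                    # hidden size is a placeholder
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(state)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```

The paper's stratified five-fold protocol and macro-averaged metrics map naturally onto sklearn.model_selection.StratifiedKFold and sklearn.metrics (for example, recall_score(..., average="macro") gives macro-sensitivity).
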
Conclusions

  • The proposed temporal technique achieves a ceiling-level water stress classification accuracy on JG and Pusa chickpea varieties and significantly outperforms the best reported CNN technique.
  • The trained CNN-LSTM model is robust: its performance drops only minimally in the presence of noisy input and remains reasonably consistent under varying degrees of noise.
  • Grad-CAM visualization reveals that the technique focuses on the plant shoot for stress classification, with the area in focus varying with the size and shape of the shoot (a minimal Grad-CAM sketch appears after this list).
Grad-CAM visualization of JG-62 images, with respect to the Inception-V3 CNN feature extractor. Figures (a), (b) belong to Young Seedling; (c), (d) belong to Before Flowering; (e), (f) belong to Control.
Grad-CAM visualization of Pusa-372 images, with respect to the Inception-V3 CNN feature extractor. Figures (a), (b) belong to Young Seedling; (c), (d) belong to Before Flowering; (e), (f) belong to Control.
  • The technique captures the water stress sensitivity characteristics of the chickpea varieties: JG, being water stress-sensitive, shows more pronounced visual changes in its shoots, and its corresponding model achieves better metric values than that of Pusa, which is stress-tolerant and exhibits milder changes.
  • Despite this difference in water stress characteristics, the proposed technique works equally well for both varieties.
Visualizing (a) average accuracy, (b) macro-sensitivity, (c) macro-specificity, and (d) macro-precision of models trained on each chickpea species and feature extractor combination, as a function of the number of sessions used for training.
  • Moreover, it is independent of the underlying CNN feature extractor for practical purposes.
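
For readers who want to reproduce the visualization, here is a minimal Grad-CAM sketch in TensorFlow/Keras following Selvaraju et al. [3]. It assumes a plain single-image CNN classifier (for instance, an Inception-V3 backbone with a classification head); the layer name below is a hypothetical example, and applying this to the full CNN-LSTM pipeline would require pulling per-session activations out of the time-distributed extractor.

```python
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a [0, 1] heatmap showing where `model` looks for its prediction."""
    # Model that exposes both the last conv feature map and the class scores
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    # Channel importance = gradients of the class score, pooled over space
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                      # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Usage (hypothetical layer name for an Inception-V3 based classifier):
# heatmap = grad_cam(model, shoot_image, last_conv_layer_name="mixed10")
```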

Limitations

  • The proposed method has been validated on a dataset collected under controlled conditions; deploying it in real-world conditions may require additional image noise removal techniques.
  • The authors’ approach requires one plant per frame for accurate analysis because it was trained on a dataset with one plant per image. For real-time deployment, plant instance detection and extraction from each image may be required, adding processing overhead.

Future Work

  • Water stress classification from images taken in the wild.
  • Design lightweight models that are less compute-intensive and deployable on edge devices.
  • Possible integration with other IoT devices that are part of a precision agriculture automation system.

Applications

  • The proposed technique can be used for plant stress monitoring due to abiotic stress like water deficiency or any other form of stress that induces progressive visual changes in the plant shoot.
  • The DL-based water stress classification method can help farmers optimize irrigation, prevent unnecessary expenditure, and promote optimum productivity by ensuring good soil health.

References

  1. Y. LeCun et al., “Handwritten digit recognition with a back-propagation network,” in Proc. Adv. Neural Inf. Process. Syst., 1990, pp. 396–404.
  2. S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
  3. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization,” in Proc. IEEE Int. Conf. Comput. Vis., Oct. 2017, pp. 618–626.
  4. M. Kumar, M. A. Yusuf, M. Nigam, and M. Kumar, “An Update on Genetic Modification of Chickpea for Increased Yield and Stress Tolerance,” Mol. Biotechnol., vol. 60, no. 8, pp. 651–663, Aug. 2018.
  5. A. Kumar, S. Nath, A. Kumar, A. K. Yadav, and D. Kumar, “Combining ability analysis for yield and yield contributing traits in chickpea (Cicer arietinum L.),” J. Pharmacognosy Phytochem., vol. 7, no. 1, pp. 2522–2527, 2018.
  6. V. Devasirvatham and D. Tan, “Impact of High Temperature and Drought Stresses on Chickpea Production,” Agronomy, vol. 8, no. 8, p. 145, Aug. 2018.
  7. S. D. Gupta, A. S. Manjri, and P. S. R. Kewat, “Effect of drought stress on carbohydrate content in drought tolerant and susceptible chickpea genotypes,” IJCS, vol. 6, no. 2, pp. 1674–1676, 2018.
  8. K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” 2014, arXiv:1409.1556.
  9. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 2818–2826.


Thank you for reading this article! If you feel this post added a bit to your exabyte of knowledge, please show your appreciation by clicking on the clap icon and sharing it with whomsoever you think might benefit from this. Leave a comment below if you have any questions or find errors that might have slipped in.

Follow me in my journey of developing a Mental Map of AI research and its impact, get to know more about me at www.rohanwadhawan.com, and reach out to me on LinkedIn!
