Classification of soil surface texture using high-resolution RGB images captured under uncontrolled field conditions

dc.contributor.advisor: Bais, Abdul
dc.contributor.author: Babalola, Ekunayo-Oluwabami Oreoluwa
dc.contributor.committeemember: Wang, Zhanle (Gerald)
dc.contributor.externalexaminer: Peng, Wei
dc.date.accessioned: 2024-10-11T17:24:47Z
dc.date.available: 2024-10-11T17:24:47Z
dc.date.issued: 2023-09
dc.description: A Thesis Submitted to the Faculty of Graduate Studies and Research in Partial Fulfillment of the Requirements for the Degree of Master of Applied Science in Electronic Systems Engineering, University of Regina. xiv, 116 p.
dc.description.abstract: Understanding the properties of soil and its impact on the environment and on farming practices requires accurate classification of its texture. Accurate soil texture classification can help optimize soil nutrient levels and improve land management. This study proposes a framework that classifies farmland soil texture from images captured under Uncontrolled Field Conditions (UFC). UFC images are captured under varying ambient light and environmental conditions, which introduce unwanted elements such as shadows and uneven illumination. Our framework combines image-processing techniques, texture-enhancing methods, and deep learning to process and classify these soils accurately. First, we apply semantic segmentation to eliminate all non-soil pixels, comparing Segmentation Network (SegNet), U-shaped Neural Network (UNet), Pyramid Scene Parsing Network (PSPNet), and DeepLab v3+ to select the best model. The trained segmentation model produces masks that are used to remove non-soil pixels from the images. This masking leaves scattered clusters of zero-valued pixels that disrupt texture information, so we next split the masked images into patches that contain only soil pixels and no zero-valued clusters (a sketch of this masking and splitting step is shown below the record fields). We then perform texture enhancement on the patches before feeding them into the classification network. For classification, we design EfficientCNN, a network intended to achieve maximum accuracy with a reduced number of parameters, and compare it with Residual Network (ResNet 50), EfficientNetB7, and Inception v3. EfficientCNN uses just 5.9 million parameters and achieves an accuracy of 84.783%, while Inception v3 uses 21.7 million parameters and achieves 85.621%; EfficientCNN is therefore only 0.838 percentage points less accurate than Inception v3. Our results contribute to agriculture and soil science studies.
dc.description.authorstatus: Student
dc.description.peerreview: yes
dc.identifier.uri: https://hdl.handle.net/10294/16424
dc.language.iso: en
dc.publisher: Faculty of Graduate Studies and Research, University of Regina
dc.title: Classification of soil surface texture using high-resolution RGB images captured under uncontrolled field conditions
dc.type: master thesis
thesis.degree.department: Faculty of Engineering and Applied Science
thesis.degree.discipline: Engineering - Electronic Systems
thesis.degree.grantor: Faculty of Graduate Studies and Research, University of Regina
thesis.degree.level: Master's
thesis.degree.name: Master of Applied Science (MASc)
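
The abstract above describes removing non-soil pixels with a segmentation mask and then splitting the masked images into patches free of zero-valued pixel clusters before texture enhancement and classification. The following is a minimal sketch of that mask-and-split step, assuming a binary soil mask, a hypothetical patch size of 224, and a simple "keep only fully soil patches" criterion; the thesis's actual splitting procedure may differ.

    import numpy as np

    def mask_non_soil(image, mask):
        # image: H x W x 3 RGB array; mask: H x W binary array (1 = soil, 0 = non-soil),
        # as produced by the chosen segmentation model.
        # Zero out every pixel labelled non-soil.
        return image * mask[..., np.newaxis].astype(image.dtype)

    def split_soil_patches(masked_image, mask, patch_size=224, min_soil_fraction=1.0):
        # Split the masked image into non-overlapping patches and keep only those
        # whose pixels are (almost) entirely soil, discarding patches that contain
        # the zero-valued clusters left behind by masking.
        # patch_size and min_soil_fraction are illustrative assumptions.
        patches = []
        height, width = mask.shape
        for top in range(0, height - patch_size + 1, patch_size):
            for left in range(0, width - patch_size + 1, patch_size):
                window = mask[top:top + patch_size, left:left + patch_size]
                if window.mean() >= min_soil_fraction:
                    patches.append(masked_image[top:top + patch_size, left:left + patch_size])
        return patches

Each retained patch would then pass through the texture-enhancement step before being fed to EfficientCNN or one of the comparison classifiers.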

Files

Original bundle

Name: Babalola, Ekunayo-OluwabamiOreoluwa_MASc_ESE_Thesis_2024Spring.pdf
Size: 38.35 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.22 KB
Description: Item-specific license agreed to upon submission