Updated February 20, 2024:
The imagery was imported in Python using the rasterio library, and its metadata was explored to retrieve information about the raster, understand the data's properties, and support geospatial analysis and visualization. The plot boundary geometries were then superimposed on the raster to extract row-by-row spectral signatures for each plot along with its unique plot ID, area, and coordinates. Because the distance between GPS points varied from plot to plot, the resulting line-string lengths differed; to standardize the plot boundaries, all buffers were resized to the same length. In the final step, a zonal statistics table was calculated consisting of plot ID, row ID, unique ID, a polygon object with the coordinates of the four plot corners, centroid, area, length, width, pixel count, and vegetation index (VI) values.

We observed that the workflow greatly reduced the manual input needed to adjust plot boundaries, and that overlapping canopies or crop lodging from adjacent plots did not limit its effectiveness. The workflow proved reproducible for spectral data collection from both single-row and multiple-row plot boundaries. The plot boundary extraction methodology presented in this study therefore provides an accurate and efficient way to extract plots, relying on simple, existing algorithms to derive plot boundaries from high-accuracy precision planter data and spectral signatures from the imagery. In the coming year, we plan to merge field-collected phenotype data with the spectral and spatial data files produced by this pipeline.
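As an illustration of the raster import and buffer-standardization steps, the minimal sketch below uses rasterio, geopandas, and shapely; the file names, column layout, target plot length, and buffer width are placeholders rather than the project's actual values.

import rasterio
import geopandas as gpd
from shapely.geometry import LineString
from shapely.ops import substring

# Import the UAV orthomosaic and inspect its metadata (file name assumed).
with rasterio.open("uav_orthomosaic.tif") as src:
    print(src.crs, src.res, src.count, src.bounds)   # CRS, pixel size, band count, extent
    raster_crs = src.crs

# Plot centerlines derived from the planter GPS log (one LineString per row; file name assumed).
rows = gpd.read_file("planter_rows.gpkg").to_crs(raster_crs)

# Standardize the along-row length of every plot before buffering,
# since GPS point spacing (and hence line length) varies between plots.
TARGET_LEN = 3.0    # metres, assumed plot length
HALF_WIDTH = 0.38   # metres, assumed half row spacing

def clip_to_length(line: LineString, target: float) -> LineString:
    """Keep a segment of fixed length centred on the line's midpoint
    (assumes each line is longer than the target length)."""
    mid = line.length / 2.0
    return substring(line, mid - target / 2.0, mid + target / 2.0)

rows["geometry"] = rows.geometry.apply(lambda g: clip_to_length(g, TARGET_LEN))
rows["geometry"] = rows.buffer(HALF_WIDTH, cap_style=2)   # flat-ended rectangular plot polygons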
We used data from the farm equipment that plants the crop to create a detailed map of where each row was planted. This data, recorded at a high rate, gave us highly accurate location information thanks to high-precision GPS technology. We then turned these planting locations into lines representing the paths the planting machine took and created buffer areas around those lines. Next, we overlaid this planting information onto high-resolution images of the field captured by a drone (UAV) to see how the crops responded.
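A minimal sketch of this points-to-lines-to-buffers step is shown below, assuming the planter log has been exported as a point file with a row identifier and timestamp; the file name, column names, and buffer distance are illustrative, not the project's actual values.

import geopandas as gpd
from shapely.geometry import LineString

# Planter GPS log exported as points; "row_id", "timestamp", and the file name are assumptions.
points = gpd.read_file("planter_points.gpkg")

# Connect the points of each planted row into a line tracing the planter's path.
lines = (
    points.sort_values("timestamp")
          .groupby("row_id")["geometry"]
          .apply(lambda pts: LineString(pts.tolist()))
)
rows = gpd.GeoDataFrame(geometry=lines, crs=points.crs).reset_index()

# Buffer each line to turn the planting path into a plot-sized area.
rows["geometry"] = rows.buffer(0.38, cap_style=2)   # assumed half row spacing, flat ends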
We have completed all of these steps, from the initial planting data to creating the areas around each row of crops, and then overlaid them on pre-cleaned UAV imagery to extract plot-level values, saving the results in a geospatial file format. Additional data were gathered, including the experimental design, field map layout, coordinates of specific areas of interest, and phenotype measurements. After processing, we obtained a file with information about each planting area, such as its ID, location, shape, size, and vegetation index values that indicate how healthy the plants are. We noticed that this approach meant we did not need to make many manual adjustments to the planting areas.
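The sketch below shows one way the per-plot values and geometry attributes could be assembled and saved, using geopandas and rasterstats; the file names, the NDVI raster, and the GeoPackage output format are assumptions for illustration, not the project's confirmed choices.

import geopandas as gpd
from rasterstats import zonal_stats

# Plot polygons produced in the previous step (file name assumed).
plots = gpd.read_file("plot_boundaries.gpkg")

# Per-plot statistics from a vegetation-index raster computed from the UAV imagery.
stats = zonal_stats(plots, "uav_ndvi.tif", stats=["mean", "count"])
plots["vi_mean"] = [s["mean"] for s in stats]
plots["pixel_count"] = [s["count"] for s in stats]

# Geometry-derived attributes reported in the zonal statistics table.
plots["centroid_x"] = plots.geometry.centroid.x
plots["centroid_y"] = plots.geometry.centroid.y
plots["area_m2"] = plots.geometry.area

# Save the combined table; GeoPackage is used here only as an example format.
plots.to_file("plot_statistics.gpkg", driver="GPKG")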
The method we used to locate the planting areas is accurate and does not require many complicated steps: we used simple, existing algorithms to pull information from the imagery and from the planting machine's data to find the plot boundaries. This automated data pipeline can help farmers and breeders analyze large-scale, plot-level data and extract valuable knowledge about the performance of different breeding varieties. It also makes it practical to work with data at greater spatial and temporal resolution, because the time needed to analyze the data has been reduced from days to minutes. Overall, this automated data analytics pipeline can help speed up the selection of drought- and disease-resistant varieties, which will help growers' operations become more profitable and sustainable in the long term.