We can process images taken from many different drone-and-camera combinations. The main requirement is being able to take a regularly spaced set of pictures that adequately covers the entire area of interest. You’ll need an autonomous flight controller to do this – the Pixhawk by 3DR or the Ruby from uThere are good choices.
We recommend you set up your flight plan in a “lawnmower” pattern, with the spacing between rows providing at least 60% image overlap (also known as sidelap). Many ground control software packages know the field of view of your camera and will set this spacing automatically. If your camera is triggered by the autopilot, set up the camera triggering to give about 60% inline overlap; at Agribotix we find it easier and less error-prone to simply set the camera to take a picture every 2 seconds. When you are flying over very uniform imagery (e.g. a field of fully grown wheat), you’ll need to collect more images to enable us to give you good results; uniform scenes offer few distinctive features for the stitching software to match, so extra overlap helps.
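If you want to sanity-check those numbers for your own setup, the geometry is simple. The sketch below is a minimal Python illustration; the altitude, field-of-view, and ground-speed values are assumptions for the example, not recommendations:

```python
import math

def ground_footprint(altitude_m, fov_deg):
    """Width of ground covered by the camera at a given altitude and field of view."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def row_spacing(altitude_m, fov_across_deg, sidelap=0.60):
    """Lawnmower row spacing that gives the requested sidelap."""
    return ground_footprint(altitude_m, fov_across_deg) * (1 - sidelap)

def trigger_interval(altitude_m, fov_along_deg, speed_m_s, overlap=0.60):
    """Seconds between shots that give the requested inline (forward) overlap."""
    return ground_footprint(altitude_m, fov_along_deg) * (1 - overlap) / speed_m_s

# Example: 100 m above ground, a camera with roughly a 70 x 50 degree field of view,
# cruising at 15 m/s. These numbers are illustrative assumptions only.
print(round(row_spacing(100, 70), 1), "m between rows")            # ~56 m
print(round(trigger_interval(100, 50, 15), 1), "s between shots")  # ~2.5 s
```

With those example assumptions, the 2-second triggering rule of thumb falls right out of the geometry.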
It mainly depends on your budget and the payload of your aircraft. More pixels are generally better, but we find about 10 megapixels to be the sweet spot; in general, we like cameras with an image sensor of 5 megapixels or greater. Larger sensors are also better, but they make the camera more expensive and heavier (the 1/2.3” sensor in the Canon and the GoPro is plenty good). Other than that, almost any camera will work, as long as we can get a geotag into the JPEG output file (see Georeferencing below). If you want NDVI imagery, you’ll need a camera that shoots in the near infrared (NIR). There are several off-the-shelf NIR options, but at Agribotix we take regular visible-light cameras and convert them in-house.
The list below is far from complete, but it will get you started.
Multirotors have a lot of vibration in their airframes, even with carefully balanced props, so you’ll want to use an isolation mount. For fixed wings, we put our cameras on a foam-rubber pad to get some degree of isolation, but vibration isn’t a big issue there. There’s no need to have the camera pointing perfectly downward, so you can lose the gimbal.
If you want to be able to see your results on a map (e.g. Google Earth), the main requirement is that we must be able to “geotag” each image. That means the latitude and longitude must be written into the image file; we’d like altitude in there too, but it’s not a requirement. There are lots of different options, so we made up a handy flowsheet to help you navigate the process. Don’t need your final image georeferenced? Then there’s no geotagging requirement.
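For the curious, geotagging just means writing GPS tags into the JPEG’s EXIF block. One way to do that yourself is with the piexif Python library, sketched below; the filename and coordinates are made-up examples, and your ground control software (or our tools) will normally handle this step for you:

```python
import piexif

def to_dms_rational(deg_float):
    """Convert decimal degrees to the EXIF degrees/minutes/seconds rational format."""
    deg_float = abs(deg_float)
    d = int(deg_float)
    m = int((deg_float - d) * 60)
    s = round(((deg_float - d) * 60 - m) * 60 * 100)
    return ((d, 1), (m, 1), (s, 100))

def geotag_jpeg(path, lat, lon, alt_m=None):
    """Write latitude/longitude (and optionally altitude) into a JPEG's EXIF GPS block."""
    gps = {
        piexif.GPSIFD.GPSLatitudeRef: b"N" if lat >= 0 else b"S",
        piexif.GPSIFD.GPSLatitude: to_dms_rational(lat),
        piexif.GPSIFD.GPSLongitudeRef: b"E" if lon >= 0 else b"W",
        piexif.GPSIFD.GPSLongitude: to_dms_rational(lon),
    }
    if alt_m is not None:
        gps[piexif.GPSIFD.GPSAltitudeRef] = 0               # 0 = above sea level
        gps[piexif.GPSIFD.GPSAltitude] = (int(alt_m * 100), 100)
    exif_dict = piexif.load(path)
    exif_dict["GPS"] = gps
    piexif.insert(piexif.dump(exif_dict), path)

# Hypothetical file and position, for illustration only.
geotag_jpeg("IMG_0001.JPG", 40.0150, -105.2705, alt_m=1655.0)
```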
If you are using a 3D Robotics flight controller and you can sync the camera clock and the GPS clock (see the flowsheet), then our Field Extractor program does the trick: it will geotag the images and send them on their way to our cloud computer. Refer to this document for instructions on using Field Extractor. If you aren’t using 3DR hardware, zip your image files and upload them through the Agribotix Field Lens. If you have a flight log, please include that too.
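If you’re wondering what geotagging from a flight log involves, the basic idea is to interpolate the logged GPS positions at each image’s capture time, after correcting for any offset between the camera clock and the GPS clock. The sketch below is only an illustration of that idea, not the Field Extractor code, and the log rows and clock offset are invented for the example:

```python
from bisect import bisect_left

def interpolate_position(log, t):
    """Linearly interpolate (lat, lon, alt) from a time-sorted flight log at time t.

    `log` is a list of (t_seconds, lat, lon, alt_m) tuples sorted by time."""
    times = [row[0] for row in log]
    i = bisect_left(times, t)
    if i == 0:
        return log[0][1:]
    if i == len(log):
        return log[-1][1:]
    t0, *p0 = log[i - 1]
    t1, *p1 = log[i]
    f = (t - t0) / (t1 - t0)
    return tuple(a + f * (b - a) for a, b in zip(p0, p1))

# Hypothetical log rows: (seconds since takeoff, lat, lon, altitude in m).
flight_log = [(0.0, 40.0100, -105.2700, 1600.0),
              (2.0, 40.0102, -105.2702, 1605.0),
              (4.0, 40.0104, -105.2704, 1610.0)]

# Image taken at t = 3.0 s by the camera clock, which runs 0.5 s ahead of the GPS clock.
camera_offset_s = 0.5
print(interpolate_position(flight_log, 3.0 - camera_offset_s))
```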
We generate both Google Earth KMZ files and GeoTIFF files. KMZ files can’t be beat for ease of viewing, but they can’t be used by a lot of GIS software; that’s where the GeoTIFF files come in. We typically downsample our product files to 30 cm per pixel, which results in a file size of around 10 MB for a 160-acre field. We can deliver higher-resolution images, up to the ground sampling distance of the original photos, upon request, but for a large field the highest-resolution files are so massive that they crash most image-processing software. For smaller test plots and the like, finer resolution isn’t a problem. Give us any special instructions regarding higher resolution in the Notes field during the upload process. The completed images will come back to you through your password-protected page on our web site.
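Those figures are easy to sanity-check with a little arithmetic; the bytes-per-pixel value below is a rough assumption about compressed mosaic sizes, not a measured number:

```python
# Rough size check for a 160-acre field delivered at 30 cm per pixel.
ACRE_M2 = 4046.86            # square meters per acre
field_m2 = 160 * ACRE_M2     # about 650,000 m^2
gsd_m = 0.30                 # ground sampling distance of the delivered mosaic
pixels = field_m2 / gsd_m ** 2
print(f"{pixels / 1e6:.1f} million pixels")                 # ~7.2 MP
print(f"~{pixels * 1.5 / 1e6:.0f} MB at 1.5 bytes/pixel")   # roughly the 10 MB quoted above
```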
We use the NIR and green channels. Traditional NDVI, developed by NASA for satellite imagery, uses NIR and red. However, at lower altitudes there is no significant scattering of the shorter-wavelength light, so green works just fine. We usually use a variant of NDVI, called DVI, which we find stands up better in ground truthing. In DVI, rather than normalizing by the sum of the NIR and green values, we expand the histogram. The effect is similar to normalizing, but from many hours spent walking through muddy fields we find the results are more representative of the crop on the ground. Of course, we can process your images using the NDVI algorithm if you’d prefer.
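For readers who want to see the formulas, a green-band NDVI and a histogram-stretched difference index look roughly like the sketch below. This is a generic illustration rather than our exact production recipe, and the percentile bounds used for the stretch are assumptions:

```python
import numpy as np

def green_ndvi(nir, green):
    """NDVI-style ratio computed with the green channel in place of red."""
    nir = nir.astype(np.float64)
    green = green.astype(np.float64)
    return (nir - green) / np.clip(nir + green, 1e-6, None)

def stretched_dvi(nir, green, low_pct=2, high_pct=98):
    """Difference vegetation index with a simple percentile histogram stretch to [0, 1]."""
    dvi = nir.astype(np.float64) - green.astype(np.float64)
    lo, hi = np.percentile(dvi, [low_pct, high_pct])
    return np.clip((dvi - lo) / (hi - lo), 0.0, 1.0)
```

Stretching the raw NIR-minus-green difference avoids NDVI’s division step while still mapping the values onto a fixed 0-to-1 range, which is one way to read the histogram expansion described above.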
Most of our customers prefer a simple “stoplight” look-up table (LUT) – green, yellow, red, and black. We will be adding a library of other LUTs soon. Please contact us if you have a specific requirement.
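As a concrete illustration, a stoplight LUT is just a small color map applied to the index values. The matplotlib sketch below uses hypothetical bin edges, not our production thresholds:

```python
from matplotlib.colors import ListedColormap, BoundaryNorm

# A hypothetical four-bin "stoplight" lookup table: black for bare soil or no data,
# then red, yellow, green for increasing vigor. The bin edges are illustrative only.
stoplight = ListedColormap(["black", "red", "yellow", "green"])
bins = BoundaryNorm([0.0, 0.15, 0.35, 0.55, 1.0], stoplight.N)

# Apply it to an index image scaled to [0, 1], e.g.:
# plt.imshow(index, cmap=stoplight, norm=bins)
```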
Unless you have taken the time and effort to put calibrated reflectance panels in your field, no. However, our experience is that relative NDVI gives the grower all the information needed with much less hassle on the ground.
Yes. Shape files take more work, so they are an added-cost option. Contact us if you would like shape files so we can process them in a way that meets your needs.
We can create DEMs, but to get any useful accuracy you need to create ground control points. If your mission requires DEMs, then let’s talk.
We typically obtain lateral accuracies of a few meters. With ground control points (GCPs), that can be reduced to a few centimeters. Creating GCPs significantly increases the level of effort in the field, but processing images that include GCPs adds only a modest amount of work on our end.
Usually we turn the process around in a few hours. Images that arrive after close of business will be processed first thing the next day.
Contact Us for information on our image processing services.