Frequently Asked Questions

What is leaf area index?

Leaf area index (LAI) is usually defined as the one-sided area of leaf material per unit area of ground. It is therefore a dimensionless number that summarises the integrated leaf density of a forest or crop. LAI is an important ancillary variable in dynamic vegetation models and in plant ecology in general. There are also various satellite retrievals of LAI based on visible and near-infrared reflectance. There is a nice Wikipedia entry on LAI.

Why use photography to estimate leaf area index?

In the field, the most accurate way to measure LAI is destructively: literally pull the leaf material off the shoots and measure it. However, that is rarely done in practice, and indirect methods are used instead. There are a few specialised instruments that do this, and laser scanning can also be applied with some caveats.

Levelled fish-eye lenses attached to SLR cameras also work. Cameras have proved popular with researchers because the barrier to entry is not as high as with, for example, laser scanning.

Imagery is taken by pointing a fisheye-equipped camera upwards towards the sky from below the canopy. The captured imagery is then segmented into sky and vegetation to estimate the canopy gap fraction. The gap fraction is plugged into a formula to estimate the leaf area index. There are various software packages to do this; one of the most popular is Hemisfer, which also contains further reading material on the topic.
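To make the "plugged into a formula" step concrete, here is a minimal sketch of the classic single-angle inversion of the gap-fraction model P(θ) = exp(−G(θ)·LAI/cos θ). The function name is ours, and the fixed projection coefficient G ≈ 0.5 is an assumption that holds near the "hinge" zenith angle of ~57.5°, where G is nearly independent of leaf angle distribution:

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, theta_deg=57.5):
    """Invert the gap-fraction model P(theta) = exp(-G * LAI / cos(theta)).
    Near theta = 57.5 degrees the projection coefficient G is ~0.5 for
    almost any leaf angle distribution, so one angle is enough."""
    theta = np.radians(theta_deg)
    G = 0.5  # assumed projection coefficient at the hinge angle
    return -np.log(gap_fraction) * np.cos(theta) / G

# a measured gap fraction of 0.2 at the hinge angle
print(round(lai_from_gap_fraction(0.2), 2))  # -> 1.73
```

Real packages such as Hemisfer average gap fractions over several zenith rings and azimuth sectors rather than using a single angle, but the underlying inversion is the same.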

What is a spherical panorama exactly and how does it differ from a hemispherical photo?

A spherical panorama is what we get when we select panorama mode on our phone. In 2D, it looks like this:

[image placeholder: spherical panorama (equirectangular projection)]

A hemispherical photograph is what we get when we attach a fish eye lens to an SLR camera. It looks like this:

[image placeholder: hemispherical (fisheye) photograph]

In smartphones, spherical panoramas are produced by stitching together many images covering a hypothetical sphere with the camera at its centre. The stitched image is then typically saved in an equirectangular projection, which maps the 3D sphere onto the 2D plane. It is this projection that produces the distortions at the top and bottom of the image, as objects are stretched to cover the full extent of the plane.

If, and this is a big if, the spherical panorama covers the whole 360-degree field of view, it should in theory be possible to map from the panorama to the upwards-looking hemisphere. Then we can use fisheye gap-fraction theory to estimate LAI or the light environment. You may have noticed the similarity between the two images above? That is because the fisheye image was reprojected from the spherical panorama, not captured with a fisheye lens.

So to convert between our two projections, do we just convert between our (x, y) coordinates using the standard polar equations?

That is the basic idea, but there is a complication.

As there is not a 1:1 correspondence between the pixels in the equirectangular projection of the spherical panorama and those of the hemispherical fisheye image, we need to apply an interpolation method to complete the mapping. In the web application we use skimage's warp method, but this can also be done in other software, or even coded yourself without too much difficulty.
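As an illustration of the "coded yourself" route, here is a minimal pure-NumPy sketch using nearest-neighbour interpolation. The function name and conventions are ours, and it assumes the panorama's top row is the zenith; skimage's warp performs the same inverse mapping with smoother interpolation:

```python
import numpy as np

def equirect_to_fisheye(pano, size=500):
    """Reproject the upper half of an equirectangular panorama into an
    upwards-looking equidistant ('fisheye') image of shape (size, size),
    using nearest-neighbour interpolation. Assumes the panorama's top
    row is the zenith and north is at column 0."""
    h, w = pano.shape[:2]
    R = size / 2.0

    # pixel grid of the output fisheye image, centred on the zenith
    rows, cols = np.mgrid[0:size, 0:size]
    dy, dx = rows - R, cols - R
    r = np.hypot(dx, dy)
    theta = (r / R) * (np.pi / 2)            # equidistant: radius ∝ zenith
    phi = np.mod(np.arctan2(dy, dx), 2 * np.pi)

    # nearest source pixel in the panorama for each output pixel
    src_row = np.clip(np.round(theta / np.pi * (h - 1)).astype(int), 0, h - 1)
    src_col = np.round(phi / (2 * np.pi) * (w - 1)).astype(int) % w

    out = pano[src_row, src_col]
    out[r > R] = 0                           # mask outside the image circle
    return out

fisheye = equirect_to_fisheye(np.random.rand(100, 200))
print(fisheye.shape)  # (500, 500)
```

The same inverse map (output fisheye pixel → source panorama pixel) can be passed to skimage's warp as a callable to get bilinear or higher-order interpolation instead of nearest-neighbour.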

Do I need a special camera to take a spherical panorama for LAI estimation?

There are specialist cameras available, such as the Ricoh Theta, but you do not need one to generate panoramas. You can use a mobile/cell phone, which stitches together many individual pictures; just make sure you cover the whole of the upper hemisphere. You can also use a drone. The Parrot ANAFI is a small, low-cost drone with the capability to point its gimbal directly upwards and capture true 360° panoramas. Other drones are typically limited by their gimbal tilt angles, so the upper hemisphere may not be fully covered.

Presumably a computer generated spherical panorama is lower quality than a good old hemispherical photo, so what are you really up to here?

You presume wrong! Yes, there can be stitching errors in panoramas; however, the resolution (number of pixels per hemisphere) of a reprojected panorama is much greater than that of a classical digital hemispherical camera (DHC) with fisheye lens.

Apart from the pixel-count issue, which is not really a deal-breaker either way, there are two very good reasons for preferring spherical panoramas over traditional systems:

  1. Cost. Spherical panoramas can be captured by most modern smartphones. A decent DHC set-up runs into the thousands of euros once a quality camera, lens and gimbal are considered.
  2. Vertical profiles via drones. DHC systems are limited to access from the ground, canopy towers, or, in recent years, heavyweight drone systems. In contrast, spherical panoramic imagery can be captured by very lightweight (<500 g) drones. This is a game changer in hemispherical photography and solves the well-known "leaning out of a metal tower holding several kilograms of camera equipment" problem!

Why don’t you just use LIDAR?

See point 1 above.

But why not just calculate parameters directly from the spherical imagery?

This is an excellent point. The main reason to convert to the hemispherical projection is to reuse existing software and theory, which are built around that specific projection. However, there is no reason why we should restrict our use of panoramic photography to these purposes only.