1. Microoptical Artificial Compound Eyes (2005); [chapters
1-2, 27 pages]:
- Apposition
Compound Eye: Consists of an array of microlenses on a curved surface. Each
microlens is associated with a small group of photoreceptors in its focal
plane (where the light is focused); one microlens plus its receptors forms one
optical channel = ommatidium. Pigments form opaque walls that prevent light
focused by one microlens from reaching an adjacent channel’s receptors.
- Interommatidial Angle: ∆Φ is used to sample the visual surroundings of
the insect. ∆Φ = D / R_eye, where D is the microlens diameter and R_eye is
the eye radius (distance from the ommatidia to the eye’s center of curvature).
- Nyquist angular frequency: The period of the
finest pattern that can be resolved is 2∆Φ.
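The two quantities above can be sketched numerically; the lens diameter and eye radius below are illustrative values, not figures from the paper:

```python
def interommatidial_angle(lens_diameter_um, eye_radius_um):
    """DeltaPhi = D / R_eye in radians, from the apposition-eye geometry."""
    return lens_diameter_um / eye_radius_um

# Illustrative values (not from the paper): 25 um lenses on a 500 um eye radius.
delta_phi = interommatidial_angle(25.0, 500.0)
finest_resolvable_period = 2.0 * delta_phi  # Nyquist: finest pattern period = 2*DeltaPhi
print(delta_phi, finest_resolvable_period)  # 0.05 0.1
```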
- Angular sensitivity function: The angular
distance Φ of an object from the ommatidium’s optical
axis determines the magnitude of that ommatidium’s response. This
dependence is described by the angular sensitivity function (ASF).
- Acceptance angle: ∆φ defines the minimum angular separation of two point
light sources that can be resolved by the optics.
- Modulation transfer function (MTF): The size
of the acceptance angle ∆φ relative to ∆Φ determines the modulation for angular
frequencies up to the eye’s Nyquist frequency; the ratio of ∆φ to ∆Φ is a
measure of the overlap or separation of the ommatidia’s FOVs.
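A common way to model the ASF is a Gaussian whose full width at half maximum equals the acceptance angle; the sketch below uses that modelling assumption (not stated in the text) with made-up angles:

```python
import math

def asf(phi, acceptance_angle):
    """Gaussian angular sensitivity function whose FWHM equals the acceptance
    angle Delta-phi -- a common modelling assumption, not taken from the text."""
    sigma = acceptance_angle / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-phi ** 2 / (2.0 * sigma ** 2))

# By construction the response is 1.0 on-axis and 0.5 at half the acceptance angle:
print(asf(0.0, 0.05), asf(0.025, 0.05))
```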
- Eye parameter: P = Diameter of microlens (D) · ∆Φ. P allows the determination of the creature’s
illumination habitat. A smaller P is found in creatures with diffraction-limited
eyes living in environments with brilliant sunlight, while a larger P is found
in nocturnal creatures or those living in darker environments, whose eyes have
high light-gathering power. The functionality of the eye is determined by
whether it captures a sufficient number of photons to distinguish a certain
pattern, i.e. by its sensitivity.
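The eye parameter reduces to a one-line product; the numbers below are illustrative, and the ~0.3 µm·rad figure for diurnal eyes comes from the wider literature, not this text:

```python
def eye_parameter(lens_diameter_um, delta_phi_rad):
    """P = D * DeltaPhi in um*rad; small P -> bright-light, diffraction-limited
    eyes, larger P -> nocturnal / light-gathering eyes (per the notes)."""
    return lens_diameter_um * delta_phi_rad

# Illustrative numbers only; diurnal insects are often quoted near P ~ 0.3 um*rad
# in the wider literature, with nocturnal species considerably larger.
print(eye_parameter(25.0, 0.05))  # 1.25
```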
- Sensitivity: Higher sensitivity enables the
same overall number of captured photons for shorter image capturing times or
lower ambient illumination. The sensitivity allows compound eyes to have a
similar f-number (F/#, the ratio of the lens’ focal length to the diameter of the
entrance pupil) to single-aperture eyes, while allowing a much wider field
of view in an even smaller volume. Matching the Airy disk diameter to the
receptor diameter is the best compromise between resolution and sensitivity.
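The Airy-disk matching condition can be turned into a quick f-number estimate; the receptor size and wavelength below are assumed values:

```python
def airy_disk_diameter_um(wavelength_um, f_number):
    """Diameter of the Airy disk out to its first minimum: 2.44 * lambda * F/#."""
    return 2.44 * wavelength_um * f_number

# Matching the Airy disk to the receptor diameter (the compromise named above)
# fixes the f-number; receptor size and wavelength here are assumed values.
receptor_um = 2.0
wavelength_um = 0.5  # green light
matched_f_number = receptor_um / (2.44 * wavelength_um)
print(matched_f_number)  # ~1.64
```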
- Scaling of natural compound eyes: The eye
radius is proportional to the square of the required resolution for compound
eyes, while the relationship is linear for single-aperture eyes. Resolution
increases with the number and size of the ommatidia, so eyes either become too
large or must accept a lower resolution. For human angular resolution, a
one-meter diameter compound eye would be needed.
- Special properties of compound eyes: Many
invertebrates have areas of locally higher resolution or “acute zones” in their
compound eyes; these zones are in the direction of highest interest. Due to the
very short focal length, the ommatidia provide a huge focal depth. The image
plane of the microlenses is located at the focal plane, independent of the
object distance. Therefore, the angular
resolution is constant over many distances. The spatial resolution scales linearly
with the object distance. The many channels allow the image-processing task to
be broken down and parallelized. Compound eyes are the optimal optical arrangement for large-FOV
vision. By pooling receptors of adjacent ommatidia, which have different
amounts of offset from the corresponding optical axis (receptor pooling), an
increased sensitivity is achieved. This improvement in sensitivity is done
without the loss of resolution.
- Superposition Compound Eye: Much more light sensitive than natural apposition compound eyes.
Found in nocturnal insects and deep-water crustaceans. The light from multiple
facets combines on the surface of the photo receptor layer to form a single
erect image of the object.
- Vision System of
the Jumping Spider: They have single-aperture eyes but have eight of them: two
high-resolution eyes, two wide-angle eyes, and four additional side eyes.
Compound eyes are not usable here, as they would provide too low a resolution
for identification.
- State of the Art
of Man-Made Vision Systems: Distributing the image-capturing task over an array
of microlenses instead of using a single objective provides a large number
of functionalities for technical imaging applications. Miniaturization and
simplification are the primary demands.
- Single aperture eye – classical objective:
The image size is given as the product of the pixel size and pixel number, and
the required magnification is defined by the focal length. Many limitations arise
when creating high resolution and magnification, exemplifying the limits of
miniaturizing classical single-channel objectives.
- Apposition compound eye – autonomous robots,
optical sensors for defense and surveillance, flat bed scanner, integral
imaging, compact cameras: A Microlens Array (MLA) is used with this eye type. The
number of image points is in general equal to the number of ommatidia, yet a
low number of ommatidia is used. MEMS (Micro-Electro-Mechanical Systems)
technology with a scanning retina is used to change the direction of view;
however, the high mechanical effort results in poor resolution.
- Superposition compound eye – Gabor
superlens, copying machines, X-ray optics: An ultra-wide-FOV imaging system based
on refractive superposition is analyzed by ray-tracing simulation.
Applications exist in highly symmetric one-to-one imaging systems using graded-index
MLAs.
- Vision system of the jumping spider –
autonomous robots, compact large-area projection, compact cameras and relay
optics: Transfer of different parts of an overall FOV accomplished by isolating
optical channels. Compact digital imaging system with segmented fields of view
would be used with an array of segments of microlenses, which are decentered
with respect to the channel origin.
- Scaling Laws of Imaging Systems:
Miniaturized imaging systems for resolution or information capacity, sensitivity,
and size are introduced.
- Resolution and Space Bandwidth Product:
First order parameters, diffraction limitation, spherical aberration, focal
depths, and information capacity. For small-scale lens systems, the diffraction
limit itself is the important criterion describing resolution. This limitation
can be avoided by using MLAs and parallel information transfer instead of
single microlenses.
- Sensitivity: Light-gathering capability is
very important to imaging systems; how it changes with scaling depends on
several factors: point sources, extended sources, and natural field
darkening. The size of the photosensitive pixels must decrease in order to
maintain resolution during scaling. By decreasing the F/# when miniaturizing an
imaging system, resolution and flux can be maintained.
- 3x3 Matrices for Paraxial
Representation of MLAs: The combination of different elements along the optical
axis via 2x2 matrices is discussed first. The 3x3 matrix formalism explicitly
contains the misalignments: a third component, which is always unity, is added
to the ray vector.
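A minimal sketch of that 3x3 formalism: a thin lens decentred by s, where the misalignment enters through the constant third ray component. All numbers are illustrative; this is not the book's worked example.

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-component ray vector (height y, angle u, 1)."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def thin_lens_decentred(f, s):
    """3x3 paraxial matrix for a thin lens (focal length f) decentred by s;
    the constant third ray component carries the misalignment term."""
    return [[1.0, 0.0, 0.0],
            [-1.0 / f, 1.0, s / f],
            [0.0, 0.0, 1.0]]

def propagation(d):
    """Free-space propagation over distance d."""
    return [[1.0, d, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]

# An on-axis ray through a lens decentred by 0.1 (f = 1.0), then to the focal plane:
ray = [0.0, 0.0, 1.0]
out = mat_vec(propagation(1.0), mat_vec(thin_lens_decentred(1.0, 0.1), ray))
print(out)  # height 0.1 at the focal plane: the decentred channel steers the view off-axis
```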
- Fundamentals: The appropriate eye type depends on the illumination
environment, the required resolution, and the time available for image
acquisition. The main advantages of compound eyes are the small required volume
and the large FOV. Compound eyes are not well suited for high resolution.
2. Bio-Inspired Optic Flow Sensors for Artificial Compound
Eyes (2014); [chapters 1-2, 26 pages]:
Chapter 1:
- Motivation:
- Artificial compound eyes: Bio-inspired artificial eyes
mimic a flying insect’s visual organs and signal
pathways. A wide FOV is combined with multi-directional sensing, implemented on
robot platforms, especially MAVs. Examples include CurvACE, with 180° horizontal and
60° vertical FOVs, and a bio-inspired digital camera which
combines elastomeric compound optical elements with deformable arrays of
photodetectors. Wide-field optic flow sensing is achieved in a
pseudo-hemispherical configuration by mounting a number of 2D-array optic
sensors on a flexible module in the 3D semi-hemispherical module
platform.
- MAVs and microsystems sensor capability gap:
MAVs have limited payload, power, and wingspan, limiting eye capabilities. Yet,
small flying insects have similar structural capabilities so compound eyes can
be carried over. An analog neuromorphic optic flow sensor, which is implemented
by mimicking the motion-sensing scheme in an insect’s compound eyes, is the only
sensor that meets the payload and power constraints.
- Bio-inspired wide-field integration (WFI)
navigation methods: The flying insect’s visual pathway contains neurons in
the compound eyes that interpret information about self-flight status from the
wide-FOV optic flow. A WFI platform mounted on a robot extracts cues on
self-status in flight.
- Current state of the art optic flow sensing
solutions for MAVs: Integrations of optic flow sensing systems for collision
avoidance and altitude control are demonstrated on some MAVs.
- Pure digital optic flow sensing approach:
This implements computation intensive signal processing algorithms on hardware.
The hardware consists of an image sensor to extract video scenes and a
microcontroller (MCU) or a field programmable gate array (FPGA) to compute the
algorithms. Provides very accurate optic flows but is difficult to put on MAVs.
- Pure analog VLSI optic flow sensors:
Include neuromorphic circuits that mimic neuro-biological architectures in an
insect’s visual pathways. This covers two aims: to implement the full
insect visual signal-processing chain as faithfully as possible, and to have pure
analog optic flow sensors selectively implement chosen aspects of the insect’s
eyes. Pros include meeting the power and payload constraints; cons include
sensitivity to temperature and process variations.
- Analog/digital (A/D) mixed-mode optic flow
sensor: Includes a time-stamp-based optic flow algorithm. Algorithm uses motion
estimation concepts from an insect’s neuromorphic algorithm and then maps the
concepts into circuits. Reduces digital calculation power and minimizes static
power consumption.
- Thesis Outline: Bio-inspired
optic flow sensors are customized for the semi-hemispherical artificial
compound eyes platform.
- Visual information processing in a flying
insect’s eyes: Starts with the anatomy of the insect’s compound eye and its
mechanism of vision-based flight control. Then describes the elementary motion
detector model that is known to estimate optic flow in the compound eyes.
Continues with the visual cues utilized in MAVs and their autonomous navigation.
Chapter 2:
- Visual Information
processing inside a flying insect’s eyes: Ommatidia include a lens and a
photoreceptor to independently sense light. Natural compound eyes in insects
have very wide FOVs and are immobile with fixed-focus optics. The eyes infer
distance information from estimating motions (or optic flows). Eyes have a much
higher detectable frequency range than single-aperture eyes in humans, but at
the cost of resolution.
- Anatomy of an insect’s compound eyes: Behind the
compound eyes lie the optic lobes, each consisting of
three types of neuropils, or ganglia (dense networks of interwoven
nerve fibers, their branches, and synapses): the lamina, medulla, and lobula
complex. Lamina: Sits right beneath the photoreceptor layer and receives direct
input from it. It provides gain-control functionality that ensures quick
adaptation to variations in background light intensity. Medulla: Includes very
small cells; sends the information from the lamina to the lobula complex.
Lobula complex: The convergence area of the optic lobe. Information from the
photoreceptors converges onto cells in the lobula plate, the
tangential cells. The information is then transferred to higher brain centers
and via descending neurons to motor centers in the thoracic ganglia.
- Elementary motion detector model: A model
based on the natural design of the specialized processing capabilities of
visual motion in the medulla. The motion-detection model is named the correlation
elementary motion detector (EMD); it is based on correlating the signal associated
with one ommatidium with the delayed signal from a neighboring ommatidium. The EMD
uses the spatial connectivity, delay operation, and nonlinear interaction of
the correlator to produce direction-selective, motion-related
output without computing derivatives.
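The correlator described above can be sketched in a few lines; this is a toy discrete-time version with an assumed low-pass coefficient and made-up test signals, not the paper's implementation:

```python
def emd_response(left, right, alpha=0.5):
    """Toy correlation-type EMD: each receptor signal is 'delayed' by a
    first-order low-pass filter (coefficient alpha, an assumed value) and
    multiplied with the undelayed neighbour; subtracting the mirrored half
    gives a signed, direction-selective output without any derivatives."""
    lp_left = lp_right = 0.0
    out = []
    for a, b in zip(left, right):
        out.append(lp_left * b - lp_right * a)  # opponent correlation
        lp_left += alpha * (a - lp_left)        # update the delayed copies
        lp_right += alpha * (b - lp_right)
    return out

# A brightness step moving left -> right (reaching the left receptor one sample
# earlier) yields a net positive response; the reverse direction flips the sign.
print(sum(emd_response([0, 1, 1, 1], [0, 0, 1, 1])))   # > 0
print(sum(emd_response([0, 0, 1, 1], [0, 1, 1, 1])))   # < 0
```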
- Wide-field optic flow analysis: lobula
complex: Image information cannot be directly retrieved at the local level;
optic flow from various regions of the visual field must be combined to infer
behaviorally significant information. Spatial integration takes place after the
medulla, in the lobula plate, where tangential neurons receive input from large
receptive fields.
- Bio-inspired autonomous navigation: The method
extracts control parameters by integrating wide-field optic flow with several
matched filters, each tuned to be sensitive to the optic flow pattern
induced by a specific movement of the robot.
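The matched-filter step reduces to an inner product between the measured flow samples and a template; the yaw template and flow values below are hypothetical numbers for illustration:

```python
def wfi_estimate(flow_samples, matched_filter):
    """Wide-field integration as an inner product: project the sampled optic
    flow field onto a template tuned to one self-motion pattern."""
    return sum(f * w for f, w in zip(flow_samples, matched_filter))

# Hypothetical yaw template over four viewing directions (uniform horizontal
# flow); the measured samples are made-up numbers roughly matching that pattern.
yaw_template = [1.0, 1.0, 1.0, 1.0]
measured = [0.9, 1.1, 1.0, 1.0]
print(wfi_estimate(measured, yaw_template))  # ~4.0, consistent with a yaw turn
```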
- Summary: In this chapter, the vision processing in an insect’s eye is
described. The three optic lobes (lamina, medulla, and lobula) perform
temporal high-pass filtering, motion detection, and wide-field optic flow
integration, respectively. The optic flows integrated by a matched filter
provide self-motion, depth and speed information for a flying insect’s flight.
A 3 degree-of-freedom (3 DOF) autonomous navigation control scheme, which
utilizes the extracted motion by WFI, is also described.
Object Detection Using Compound Eyes:
1. A new method for moving object detection using variable
resolution bionic compound eyes (2011):
- Introduction: An
image filter with variable resolution and a proposal for variable-resolution
double-level contours are discussed. First, the double-level method performs
contour detection on the lowest-resolution image. Then, a texture-gradient
computation carries out contour segmentation in the
high-resolution image acquired by the single eye. Lastly, a Geodesic Active
Contour (GAC) model for texture-boundary detection is applied so the moving target is
recognized. The new method was created to reduce the complexity of the
3-2-3-dimension single-camera capture technique. Contour detection and variable
resolution are important characteristics of the compound eye.
- Imaging mechanism
of ommatidium: Compound eyes consist of the large field system and small field
system. The large field system detects global motion while the small field
system detects moving targets accurately.
- Compound eyes multi-resolution
imaging: The facets of the compound eyes vary in concentration
depending on location. The ommatidia with low resolution in sparse areas can
detect changes in motion while requiring only a small amount of data;
ommatidia with high resolution in dense areas can accurately identify objects.
An elliptical projection of the ommatidium was used to obtain information
within the oval region. A Gaussian-weighted center of gravity is used to
improve de-noising when the center of gravity is extracted from the elliptical
image region.
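The Gaussian-weighted centre of gravity can be sketched as follows; the patch size, sigma, and the rectangular (rather than elliptical) window are assumptions for illustration:

```python
import math

def gaussian_weighted_centroid(patch, sigma):
    """Centre of gravity of an image patch with a Gaussian weight centred on
    the patch -- one plausible reading of the de-noised centroid extraction
    described above (here over a rectangular window, for simplicity)."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    total = sy = sx = 0.0
    for y in range(h):
        for x in range(w):
            wgt = patch[y][x] * math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
            total += wgt
            sy += y * wgt
            sx += x * wgt
    return sy / total, sx / total

patch = [[0.0] * 5 for _ in range(5)]
patch[2][3] = 1.0  # single bright pixel
print(gaussian_weighted_centroid(patch, sigma=2.0))  # (2.0, 3.0)
```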
- Image filter with
bionic compound eyes: The different distribution densities in the eyes allow
different types of filters to be used to process the images: small Gaussian
filters for the sensitive areas and larger Gaussian filters in the denser
regions. Noise always affects the image, so the several compound-eye
projection images are filtered with the Gaussian function before information is
extracted.
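A toy version of density-dependent filtering, pairing filter size with region as the notes state; the sigma values and the half-and-half split are assumed:

```python
import math

def gaussian_kernel(sigma, radius=3):
    """Normalized 1-D Gaussian kernel of width 2*radius + 1."""
    k = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    """1-D Gaussian smoothing with zero padding at the ends."""
    kern = gaussian_kernel(sigma)
    r = len(kern) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kern):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += kv * signal[idx]
        out.append(acc)
    return out

# Assumed sigmas: a small filter on the first (sensitive) half, a larger one on
# the second (denser) half, following the pairing stated in the notes.
sig = [0.0] * 16
sig[4] = sig[12] = 1.0
out = smooth(sig[:8], 0.5) + smooth(sig[8:], 2.0)
print(out[4], out[12])  # the impulse under the larger filter is flattened more
```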
- Description of the
Detection Algorithm with Variable Resolution: Very few bionic compound eyes
have been made. The method uses a moving-object detection algorithm based on
variable-resolution double-level contours.
- Level 1: Contour Detection: First, a
low-resolution remotely sensed image is processed, and image edges (contours)
are detected with the Canny method. Then, the minimum-area rectangle that
encloses the contour is calculated.
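Level 1 can be sketched with a crude gradient threshold standing in for the Canny step (to stay dependency-free) followed by the minimum axis-aligned bounding box; image, blob, and threshold are all made up:

```python
def edge_bounding_box(img, thresh=0.5):
    """Level-1 sketch: mark strong central-difference gradients in a
    low-resolution image (a crude stand-in for the Canny detector) and return
    the minimal axis-aligned box enclosing them. The threshold is an assumed
    value, and the real method fits a minimum-area (rotated) rectangle."""
    h, w = len(img), len(img[0])
    ys, xs = [], []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            if (gy * gy + gx * gx) ** 0.5 > thresh:
                ys.append(y)
                xs.append(x)
    return min(ys), max(ys), min(xs), max(xs)

low_res = [[0.0] * 8 for _ in range(8)]
for y in (3, 4):
    for x in range(2, 6):
        low_res[y][x] = 1.0   # bright "ship" blob on dark water
print(edge_bounding_box(low_res))  # (3, 4, 2, 5): a tight box around the target
```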
- Level 2: Contour Segmentation with GAC
Texture Gradient Model: The next step processes a high-resolution remotely
sensed image of the same surface features. A GAC gradient-flow model based on
texture gradient is used.
- Results: The profile
of a ship in water is extracted from the low-resolution image, and a red
rectangle of minimum area is fitted around the profile. Only low-resolution
images are needed for the initial profile detection; the algorithm then
processes the high-resolution images, computing over the smaller area to
accurately extract the target. This method provides improved computational
efficiency and data handling.
-----------------
- The concepts of the compound eyes are relatively easy to understand, but the math and actual implementation behind them seem complicated. Nevertheless, I see that the use of compound eyes in our drone defense robot will be good, as the eyes can be very efficient.
- I have made progress with the compound eyes and the object motion detection. I have yet to complete the OpenCV projects. I will be working on these soon.
-----------------