-
Population size and diversity in genetic algorithms (Hougen). While evolutionary
computation methods are capable of providing outstanding results on
real-world problems (as illustrated by the Human-Competitive
Results awards of the Genetic and Evolutionary Computation
Conference), there is still much research to be done before these methods can be used in integrated
applications. Evolutionary computation methods typically require users to set a large number of
parameters, and system behavior may still be influenced by other
factors (for example, resilience; Thomason & Soule, 2006). One factor
often credited with differences in
system performance is diversity of the initial population. Recent research suggests that population
size is a much more important factor than initial population diversity (Diaz-Gomez,
forthcoming), but the full answer is not yet known. We investigate the effects of initial population diversity
and diversity retention in a variety of real-world applications.
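As a concrete illustration (the bitstring representation and the diversity measure here are our own assumptions, not the setup of the cited studies), initial population diversity is often summarized as the mean pairwise Hamming distance, which can be varied independently of population size:

```python
import random

def init_population(pop_size, genome_len, rng=random.Random(0)):
    """Random bitstring population, as in a canonical genetic algorithm."""
    return [[rng.randint(0, 1) for _ in range(genome_len)]
            for _ in range(pop_size)]

def mean_pairwise_hamming(pop):
    """One common diversity measure: mean Hamming distance over all pairs."""
    n = len(pop)
    total = sum(sum(a != b for a, b in zip(pop[i], pop[j]))
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

pop = init_population(pop_size=20, genome_len=32)
print(mean_pairwise_hamming(pop))  # near genome_len / 2 for uniform random bits
```

Varying `pop_size` and the diversity of the initial sample independently is what makes it possible to separate the two factors discussed above.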
-
Maximizing Fitness while Minimizing Trial Time.
Evolutionary computation methods such as genetic algorithms and
genetic programming proceed by alternating between generating
candidate solutions and evaluating their fitness. When using these
methods for learning control of physical systems, evaluating fitness
often consists of running the system through tests of its abilities,
known as trials. The accepted wisdom seems to be that one
should run several trials per candidate solution (Moriarty, Schultz &
Grefenstette, 1999).
Unfortunately, this is an expensive undertaking: whether learning is
done on actual hardware or in high-fidelity simulation, trial time
will almost surely be the limiting factor. Fortunately, we
have found reason to question this accepted wisdom. Surprisingly, for
at least one benchmark problem,
minimizing the number of trials per fitness evaluation resulted in
faster learning (that is, a faster increase in fitness per trial run)
(Hougen, Carmer & Woehrer, 2003). This project explores the relationship between
trials per evaluation and fitness increase over a range of additional
benchmark problems and testing conditions.
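The trade-off can be sketched in a few lines (the noise model and budget are illustrative assumptions, not the benchmark or conditions of the cited work): averaging more trials lowers the variance of each fitness estimate, but under a fixed trial budget every extra trial per evaluation reduces how many candidate solutions the algorithm can examine.

```python
import random

rng = random.Random(1)

def run_trial(candidate):
    """Hypothetical noisy trial: true fitness plus zero-mean noise."""
    return candidate["true_fitness"] + rng.gauss(0.0, 0.5)

def evaluate(candidate, trials_per_eval):
    """Estimate fitness by averaging several trials (the conventional approach)."""
    return sum(run_trial(candidate) for _ in range(trials_per_eval)) / trials_per_eval

# Under a fixed trial budget, fewer trials per evaluation means the
# algorithm can examine more candidate solutions:
budget = 120
for k in (1, 3, 6):
    print(f"{k} trial(s)/eval -> {budget // k} evaluations within budget")
```

The question the project studies is which point on this accuracy-versus-throughput curve yields the fastest fitness increase per trial actually run.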
-
Memetic Learning in Real Robots. Memetic learning is a novel
machine-learning method that combines components from group-level
learning methods (including various forms of evolutionary computation)
and individual-level learning methods (such as reinforcement learning).
Memetic learning is based on the way humans learn through imitation of
successful behavior they observe. Because imitators can draw on the
experiences of others to guide their own learning, they should learn
more quickly than systems restricted solely to their own
experiences. This
is particularly important in learning from data in real physical
systems, such as robots, because experiences are so expensive to
collect in these systems. In simulations of control problems memetic
learning has proven quite effective at learning with fewer experiences
than similar evolutionary computation methods (Hougen, Carmer &
Woehrer, 2003; Eskridge & Hougen, 2003a, 2003b).
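A minimal sketch of this combination (on the OneMax toy problem, not the robot tasks studied here; the mutation rate and hill-climbing step count are arbitrary assumptions) pairs a group-level evolutionary loop with per-individual local improvement:

```python
import random

rng = random.Random(2)

def fitness(bits):
    """OneMax: count of ones; stands in for an expensive real-world trial."""
    return sum(bits)

def local_improve(bits, steps=5):
    """Individual-level learning: greedy single-bit hill climbing."""
    bits = bits[:]
    for _ in range(steps):
        trial = bits[:]
        trial[rng.randrange(len(trial))] ^= 1  # flip one random bit
        if fitness(trial) > fitness(bits):
            bits = trial
    return bits

def memetic_step(pop):
    """Group-level learning (selection + mutation), then local improvement."""
    pop = sorted(pop, key=fitness, reverse=True)
    parents = pop[:len(pop) // 2]
    children = [local_improve([b ^ (rng.random() < 0.05) for b in p])
                for p in parents]
    return parents + children

pop = [[rng.randint(0, 1) for _ in range(30)] for _ in range(10)]
for _ in range(10):
    pop = memetic_step(pop)
best = max(fitness(ind) for ind in pop)
print(best)
```

In a real robot setting, the hill climb would be replaced by an individual learning method such as reinforcement learning, with successful behaviors shared across individuals in the imitation style described above.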
fagg [[at]] ou.edu
Last modified: Wed Feb 20 00:03:48 2008