Symbiotic Computing Laboratory
Current Projects


Robot Assistants for Promoting Crawling and Walking in Children at Risk of Cerebral Palsy

Andrew H. Fagg, Lei Ding, Thubi H.A. Kolobe, David P. Miller


Spatiotemporal Multidimensional Relational Learning

Mathew Bodenhamer, Thomas Palmer, Andrew H. Fagg, Amy McGovern


Mobile Manipulation

Di Wang, Joshua Southerland, Charles de Granville, Andrew H. Fagg

Brain-Machine Interfaces

David Goldberg, Andrew Hill, Andrew H. Fagg, Lee Miller (NWU), Nicho Hatsopoulos (U Chicago), Greg Ojakangas (Drury U)

Modern prosthetic arm/hand systems suffer from a variety of challenges, including limited bandwidth in the communication from human to device, difficulty in controlling (and in learning to control) the device, and reliance on visual feedback to guide movements. We are developing signal processing and machine learning techniques that will ultimately enable motor cortical activation patterns to command a robotic prosthetic device. This process involves constructing computer models of the transformation from cell activity to arm motion.
In particular, we are examining:
  • the incorporation of arm, muscle, and spinal cord dynamics directly into the modeling process,
  • the best ways to describe commanded movements,
  • the types of information that are relevant to computing movement commands (including limb state information), and
  • the construction of robust and general models.
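
As a simple illustration of the modeling step, the sketch below fits a ridge-regularized linear map from lagged firing rates to hand velocity. Everything here is a toy: the data are synthetic, and the lag structure, regularization, and decoder form are illustrative choices, not the lab's actual pipeline.

```python
import numpy as np

# Toy illustration of one common decoding approach: a linear
# (ridge-regularized) map from lagged firing rates to hand velocity.
# All data are synthetic; this is not the lab's actual pipeline.

rng = np.random.default_rng(0)
T, n_cells, n_lags = 500, 30, 5

# Synthetic "true" velocity and firing rates that noisily encode it.
vel = np.cumsum(rng.normal(size=(T, 2)), axis=0) * 0.01
tuning = rng.normal(size=(n_cells, 2))
rates = vel @ tuning.T + rng.normal(scale=0.5, size=(T, n_cells))

# Lagged design matrix: rates at times t, t-1, ..., t-n_lags+1
# (the first n_lags rows are dropped because np.roll wraps around).
X = np.hstack([np.roll(rates, lag, axis=0) for lag in range(n_lags)])[n_lags:]
Y = vel[n_lags:]

# Ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
pred = X @ W

r2 = 1 - ((Y - pred) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
print(f"decoding R^2: {r2:.2f}")
```

A linear decoder of this kind is only a starting point; the questions listed above (dynamics, command representations, limb state) all concern what should replace or augment such a model.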

Robot Learning from Demonstration

Joshua Southerland, Charles de Granville, Andrew H. Fagg, John Sweeney (UMass), Michael Rosenstein (UMass), Roderic Grupen (UMass)

We are developing techniques that allow robots to recognize the manipulation actions of others. These techniques enable robots to automatically learn new skills through demonstration by human teachers, and make communication between robots and humans in collaborative tasks easier.

The current work is focused on pick-and-place tasks, in which the robot successively grasps objects and places them in a particular position, thus forming an assembly of objects. The human "teacher" wears a P5 dataglove equipped with a Polhemus sensor; these sensors allow the robot to track the hand and finger motion of the human. The challenge is how to interpret the human's movements in terms of individual pick-and-place movements.

Key to our approach is to make use of the robot's own reach controllers to interpret the actions of the human teacher. The robot starts by constructing a representation of all of the objects and their locations within the environment. For each object, the robot "imagines" how it would move to grasp it. This imagined movement is compared against the actual movement made by the teacher. When the human's movements match one of the hypothesized movements well, this hypothesis is considered to be the explanation of the observed movements.
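
The matching idea can be sketched in a few lines: generate one hypothesized reach per object, score the observed hand path against each, and take the best-scoring hypothesis as the explanation. The straight-line reach model and the simple time-aligned distance score below are illustrative stand-ins, not the robot's actual reach controllers.

```python
import numpy as np

# Toy sketch of the "imagined movement" matching idea: one hypothesized
# reach per object, scored against the observed hand path. The names and
# the point-to-point distance score are illustrative only.

def imagined_reach(start, target, n=50):
    """Straight-line reach hypothesis from start to target."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return start + t * (target - start)

def score(observed, hypothesis):
    """Mean distance between time-aligned paths (lower is better)."""
    return np.linalg.norm(observed - hypothesis, axis=1).mean()

start = np.zeros(3)
objects = {"red_block": np.array([0.4, 0.1, 0.0]),
           "blue_block": np.array([-0.2, 0.5, 0.1])}

# Observed demonstration: a noisy reach toward the red block.
rng = np.random.default_rng(1)
observed = (imagined_reach(start, objects["red_block"])
            + rng.normal(scale=0.01, size=(50, 3)))

scores = {name: score(observed, imagined_reach(start, tgt))
          for name, tgt in objects.items()}
best = min(scores, key=scores.get)
print("best explanation:", best)
```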

Learning Grasp Affordances

Charles de Granville, Joshua Southerland, Andrew H. Fagg

In order for a robot to grasp an object, it must determine an appropriate position and orientation for its hand. Common approaches to this problem include mapping objects to grasps given a set of predefined heuristics. We are developing a system that automatically learns this same mapping by observing humans grasp the objects. One of the challenges is how to transform a large number of (often redundant) observations into a small number of grasp possibilities. This compact representation of how to interact with a particular object is important in subsequent stages as we require our robots to plan and to learn in novel environments.

The immediate focus of this project is how to group the observed set of hand orientations into a small number of canonical orientations. Observed hand approach orientations (in 3D) are represented as unit quaternions (points on a 4D hypersphere). We cluster these points using a mixture distribution-based clustering method. Individual clusters of orientations are represented using probability density functions that have Gaussian-like shapes. Experimental results demonstrate the feasibility of extracting a compact set of canonical grasps from the human demonstration. These extracted grasps can then be used to parameterize controllers that are capable of driving a hand to an appropriate pose for grasping, or interpreting the actions of other agents in the environment.
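
The clustering step can be sketched as follows. Note that the sketch substitutes a simple k-means-style loop for the mixture-distribution method described above; what it keeps is the essential quirk of the representation, namely that q and -q denote the same rotation, so distances and averages must be antipodally symmetric. All names and parameters are illustrative.

```python
import numpy as np

# Toy sketch of grouping observed hand orientations (unit quaternions)
# into canonical grasps. Uses a k-means-style loop with the antipodally
# symmetric distance 1 - |q1.q2|, since q and -q are the same rotation.
# The lab's approach is mixture-distribution clustering; this is a sketch.

rng = np.random.default_rng(2)

def normalize(q):
    return q / np.linalg.norm(q, axis=-1, keepdims=True)

# Two synthetic "canonical grasps": noisy samples around two quaternions.
centers_true = normalize(rng.normal(size=(2, 4)))
samples = normalize(np.vstack([
    c + rng.normal(scale=0.05, size=(40, 4)) for c in centers_true
]))

def cluster(qs, k, iters=20):
    centers = qs[rng.choice(len(qs), k, replace=False)]
    for _ in range(iters):
        # Assign each quaternion to the closest center (antipodal-safe).
        d = 1.0 - np.abs(qs @ centers.T)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = qs[labels == j]
            if len(members):
                # Flip members into one hemisphere before averaging.
                signs = np.sign(members @ centers[j])
                centers[j] = normalize((members * signs[:, None]).mean(0))
    return centers, labels

centers, labels = cluster(samples, k=2)
print("cluster sizes:", np.bincount(labels))
```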

Redundant Array of Inexpensive Digits

Brian Watson, Di Wang, Andrew H. Fagg

Commercially available robot hands (in particular, those equipped with sophisticated sensors) are expensive to purchase and maintain. Our lab is exploring the possibility of constructing hands from inexpensive components. Although this approach limits the capabilities of the individual fingers that make up a hand, it allows one to achieve capability through the redundancy that is possible with a large number of fingers.
One key component of this approach is that of accurately sensing the contact location between a finger of a robot hand and an object. Our approach is to embed a six-axis force/torque sensor within the finger. Given the sensed forces and torques, and knowledge of the finger geometry, one can infer the location of a contact. However, sensing is limited to contacts that are distal from the sensor. We have been developing a hand testbed in which we move the force/torque sensor from the finger tip (the typical configuration) to the base of the finger. This approach allows for the sensing of contacts across the entire surface of the finger, but dramatically increases the complexity of the sensor interpretation problem. Problems that we are addressing include:

  • distinguishing between contact and gravitational forces (in a configuration-dependent manner),
  • separating real contacts from the "ghost" contacts that arise because the sensed forces and torques admit redundant solutions, and
  • dealing with multiple, simultaneous contacts between finger and object.
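
The contact localization problem above has a clean geometric core: a single point contact applying force f at location r (relative to the sensor) produces the measured torque tau = r x f, so the set of consistent contact locations forms a line; intersecting that line with the finger surface yields the real contact and, possibly, a "ghost". The sketch below works this out for an idealized cylindrical finger; the geometry and numbers are illustrative, not our hardware's.

```python
import numpy as np

# Sketch of intrinsic contact sensing: a point contact with force f at
# point r (relative to the sensor) produces torque tau = r x f. The
# consistent contact points form a line, which we intersect with a
# simple finger model (a cylinder along the z-axis). Illustrative only.

def contact_line(f, tau):
    """Minimum-norm point on, and direction of, the contact line."""
    p0 = np.cross(f, tau) / np.dot(f, f)
    return p0, f / np.linalg.norm(f)

def cylinder_contacts(f, tau, radius):
    """Intersect the contact line with the cylinder x^2 + y^2 = r^2."""
    p0, d = contact_line(f, tau)
    a = d[0] ** 2 + d[1] ** 2
    b = 2 * (p0[0] * d[0] + p0[1] * d[1])
    c = p0[0] ** 2 + p0[1] ** 2 - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # line misses the finger surface
    ts = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    return [p0 + t * d for t in ts]  # real contact plus possible "ghost"

# Synthetic check: a contact on a radius-0.01 m finger at z = 0.05 m.
r_true = np.array([0.01, 0.0, 0.05])
f = np.array([-1.0, 0.2, 0.0])   # force pushing into the finger surface
tau = np.cross(r_true, f)
candidates = cylinder_contacts(f, tau, radius=0.01)
print([np.round(c, 4) for c in candidates])
```

Distinguishing the real contact from the ghost requires extra information, such as which candidate lies on the side of the surface consistent with an inward-pointing force.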

Bion: A Sensor Network Approach to Interactive Art

Adam Brown, Andrew H. Fagg, Andrew Snyder, Brent Goddard, Charles de Granville

A sensor network is a large collection of sensing (and actuation) devices that can be distributed in an ad hoc fashion throughout an environment. Through wireless (and often local) connections, the individual sensor nodes coordinate their activities and communicate critical pieces of information to more global repositories.

Bion is an experiment in creating interactive art using a large, distributed network of sensor nodes. Individual nodes are equipped with a microcontroller, four infrared transceivers, a piezoelectric speaker, and a set of visible-light light emitting diodes (LEDs). The infrared transceivers are used not only to communicate with neighboring nodes, but also to sense the presence of visitors to the installation. Large-scale coordination of the network's response to the visitors is accomplished through a probabilistic re-broadcasting communication model.
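
Probabilistic re-broadcasting can be sketched as a gossip protocol: a node that hears a message for the first time forwards it to its neighbors with some probability p, which trades off network coverage against communication load. The grid topology and parameters below are illustrative, not the Bion installation's actual firmware.

```python
import random
from collections import deque

# Toy simulation of probabilistic re-broadcasting on a grid of nodes:
# a node hearing a message for the first time forwards it to its four
# neighbors with probability p. Topology and parameters are illustrative.

def gossip(width, height, p, seed=0):
    rng = random.Random(seed)
    source = (0, 0)             # message originates at one corner
    heard = {source}
    queue = deque([source])
    while queue:
        x, y = queue.popleft()
        # The source always transmits; others re-broadcast with prob p.
        if (x, y) != source and rng.random() > p:
            continue
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in heard:
                heard.add((nx, ny))
                queue.append((nx, ny))
    return len(heard) / (width * height)

for p in (0.3, 0.6, 1.0):
    print(f"p={p}: {gossip(20, 20, p):.0%} of nodes heard the message")
```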

Hybrid Supervised/Reinforcement Learning Models of Motor Program Acquisition

Andrew H. Fagg, Andrew G. Barto (UMass)

Work in psychophysics and neuroscience tells us that multiple, distinct mechanisms of learning are involved in the process of acquiring a new skill. We are interested in studying models of these learning mechanisms and their interaction.

We are currently developing an abstract model in which a reinforcement learning (RL) module is responsible for selecting from a small number of available corrective actions, but the meaning of these actions is altered at the same time by a supervised learning (SL) mechanism. This SL mechanism interprets the current movement (e.g., a reach) as a correction of the previous movement. Thus, it provides an estimate of the previous movement's error. This model is particularly interesting in that it uses exploratory learning when there is little information about how to perform a movement, but then comes to rely on supervisory training information when the corrective movement teacher becomes competent.

Fractional Power Damping Model of Spinal/Muscle Interaction

Andrew H. Fagg, Andrew G. Barto (UMass), Jim Houk (NWU)

One of the critical questions to be addressed when examining the role of the brain in motor control is the relative contribution of the peripheral systems: the muscles, the sensors embedded within the muscles and other tissue, and the neural circuitry within the spinal cord. It is common in the modeling community to assume that these peripheral systems impose a linear transformation on the motor signals generated by the brain. Although a simple assumption, it implies that the full complexity of a temporal muscle activation pattern is due to the motor commands generated by the central nervous system itself, and describing this observed behavior therefore requires a large number of parameters.

We have developed a model of muscle/spinal interaction that includes key nonlinearities, particularly within the feedback loop implemented by the spinal circuitry (Houk, Fagg, and Barto; 1999). Although these nonlinearities impose additional complexities to the modeling process, we have shown that they can drastically reduce the complexity of the motor command that is necessary to produce realistic muscle activation patterns (Fagg, Barto, and Houk; 1998, 2002). This observation has important implications for how the brain represents and learns motor skills.
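
A toy one-degree-of-freedom simulation conveys the qualitative difference between the two damping assumptions. The damping force is taken proportional to sign(v)|v|^p: p = 1 gives ordinary linear damping, while p << 1 (an exponent near 1/5 is a common choice in such models) makes the damping disproportionately strong at low speeds, so the limb "sticks" abruptly near wherever the movement ends rather than settling smoothly. All parameters here are illustrative, and this sketch is not the published model itself.

```python
import numpy as np

# 1-DOF limb as a mass with spring-like muscle stiffness and damping
# force b*sign(v)*|v|^p, driven by a simple step command to target = 1.
# p = 1 is linear damping; p = 0.2 is fractional-power damping.
# Parameters are illustrative only.

def simulate(p, k=10.0, b=4.0, m=1.0, target=1.0, dt=1e-3, T=3.0):
    x, v = 0.0, 0.0
    xs = []
    for _ in range(int(T / dt)):
        damping = b * np.sign(v) * abs(v) ** p
        a = (k * (target - x) - damping) / m
        v += a * dt        # semi-implicit Euler integration
        x += v * dt
        xs.append(x)
    return np.array(xs)

lin = simulate(p=1.0)
frac = simulate(p=0.2)
for name, xs in (("linear (p=1.0)", lin), ("fractional (p=0.2)", frac)):
    print(f"{name}: peak x = {xs.max():.3f}, final x = {xs[-1]:.3f}")
```

Under the linear assumption the limb oscillates and settles gradually; under fractional-power damping the movement halts sharply once the speed collapses, which is part of why a much simpler motor command can suffice.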

Force & Torque Based Grasping Controller

Di Wang, Brian Watson, Andrew H. Fagg

The F/T grasp controller is a set of control algorithms for the "Redundant Array of Inexpensive Digits" hand. We have developed a simulation environment using VTK, in which the direction of a contact force is simulated using the surface normal at the contact point. The controller can either drive the real finger in a client-server manner or perform grasps entirely in simulation.
In this research, we focus mainly on the following problems:

  • finding contact location using data from F/T sensor and eliminating "ghost" contacts,
  • reducing the residual force/torque error using simple rules derived directly in Cartesian space, and
  • dealing with both convex and concave objects using multiple fingers.


fagg [[at]] ou.edu

Last modified: Tue Oct 30 00:13:04 2012