Evolving Multimodal Networks for Multitask Games

Title
Evolving Multimodal Networks for Multitask Games
Description
This is an Accepted Manuscript of an article published by IEEE. Schrum, J., & Miikkulainen, R. (2012). Evolving Multimodal Networks for Multitask Games. IEEE Transactions on Computational Intelligence and AI in Games, 4(2), 94–111. https://doi.org/10.1109/TCIAIG.2012.2193399. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Creator
Schrum, Jacob
Miikkulainen, Risto
Date
2016-12-13
Date Available
2016-12-13
Date Issued
2012
Identifier
Schrum, J., & Miikkulainen, R. (2012). Evolving Multimodal Networks for Multitask Games. IEEE Transactions on Computational Intelligence and AI in Games, 4(2), 94–111. https://doi.org/10.1109/TCIAIG.2012.2193399
URI
https://collections.southwestern.edu/s/suscholar/item/229
Abstract
Intelligent opponent behavior makes video games interesting to human players. Evolutionary computation can discover such behavior; however, it is challenging to evolve behavior that consists of multiple separate tasks. This paper evaluates three ways of meeting this challenge via neuroevolution: 1) multinetwork learns separate controllers for each task, which are then combined manually; 2) multitask evolves separate output units for each task, but shares information within the network's hidden layer; and 3) mode mutation evolves new output modes, and includes a way to arbitrate between them. Whereas the first two methods require that the task division be known, mode mutation does not. Results in Front/Back Ramming and Predator/Prey games show that each of these methods has different strengths. Multinetwork is good in both domains, taking advantage of the clear division between tasks. Multitask performs well in Front/Back Ramming, in which the relative difficulty of the tasks is even, but poorly in Predator/Prey, in which it is lopsided. Interestingly, mode mutation adapts to this asymmetry and performs well in Predator/Prey. This result demonstrates how a human-specified task division is not always the best. Altogether, the results suggest how human knowledge and learning can be combined most effectively to evolve multimodal behavior.
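The abstract describes two network-level ideas: a multitask architecture with a shared hidden layer and one output head per task, and mode mutation, in which modes are added over evolution and the network arbitrates between them. The following is a minimal illustrative sketch of those ideas, not the authors' implementation; the class name, layer sizes, random initialization, and the use of the last output of each head as a preference neuron are all assumptions made for illustration.

```python
import numpy as np

class MultimodalNetwork:
    """Sketch of a network with a shared hidden layer and multiple output heads."""

    def __init__(self, n_inputs, n_hidden, n_actions, n_modes, seed=0):
        rng = np.random.default_rng(seed)
        self.w_hidden = rng.normal(scale=0.5, size=(n_inputs, n_hidden))
        # One head per mode; each head outputs action values plus one preference value.
        self.heads = [rng.normal(scale=0.5, size=(n_hidden, n_actions + 1))
                      for _ in range(n_modes)]

    def activate(self, x, task=None):
        h = np.tanh(x @ self.w_hidden)           # shared hidden layer
        outputs = [h @ w for w in self.heads]    # one output vector per mode
        if task is not None:
            # Multitask use: a human-specified task index selects the head.
            chosen = outputs[task]
        else:
            # Mode-mutation use: the network arbitrates on its own via
            # preference neurons (assumed here to be each head's last output).
            chosen = max(outputs, key=lambda o: o[-1])
        return chosen[:-1]                       # action outputs only

    def add_mode(self, rng):
        # Mode mutation: append a new, randomly initialized output head.
        n_hidden, width = self.heads[0].shape
        self.heads.append(rng.normal(scale=0.5, size=(n_hidden, width)))

# Example: a two-mode network controlling a hypothetical 3-action agent.
net = MultimodalNetwork(n_inputs=5, n_hidden=8, n_actions=3, n_modes=2)
sensors = np.ones(5)
print(net.activate(sensors, task=0))   # multitask: task division given
print(net.activate(sensors))           # arbitration via preference neurons
```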
Language
English
Publisher
IEEE
Subject
Computer games
Evolutionary computation
Approximation methods
Multimodal networks
Multitask games
Multitask learning
Mode mutation
Type
Article