
Supporting Flexible Autonomy in a Simulation Environment for Intelligent Agent Designs

John Anderson and Mark Evans

Department of Computer Science

University of Manitoba

Winnipeg, Manitoba, Canada R3T 2N2

andersj@cs.umanitoba.ca evans@cs.umanitoba.ca

Abstract

Intelligent agents designed to perform in the real world should, by definition, be tested and evaluated in such a world. However, this is impossible in many situations: a lack of resources may rule out construction of a complete robotic environment, for example, or the desired domain may be physically inaccessible for testing. In such situations, the use of a simulation system is necessitated. In this paper we describe Gensim, a generic simulation system for intelligent agents. Rather than providing a single, parameterized domain, Gensim provides a collection of facilities allowing users to design complete environments for examining and testing intelligent agents. The system also provides direct low-level support for implementing sets of agents that display flexible autonomy: the ability to solve problems by varying the degree of autonomy of single agents in a multi-agent system.

1. Simulation and Flexible Autonomy

"At the intersection of computers, control, information, and management, design for high autonomy requires the tools of AI and Simulation to successfully integrate decision making and physical layers."

[Zeigler and Rozenblit, 1990]

Nearly three years ago, Zeigler and Rozenblit stressed the utility of simulation and artificial intelligence to design for high autonomy. Three years later, it would be more appropriate to stress the strong influence that those working from the perspective of high autonomy have had on the field of artificial intelligence. At the time, autonomy (the ability of an agent or system to function independently) was recognized as applicable to AI but also nearly universally lacking in AI systems [Zeigler, 1990]. Today, however, consideration of autonomy is appropriately being recognized as an important component of intelligent systems research. Autonomy is an important consideration in designing single-agent intelligent systems, since one of the basic design goals of such systems is to minimize the amount of human intervention required during their operation. However, the concept of autonomy is much more significant in multi-agent systems. This is not only because these domains are more realistic (there are few if any real-world domains in which agents can perform in complete isolation from one another),

but because the concept of autonomy becomes more complex in domains where agents must interact with one another. In multi-agent domains, agents can never be completely autonomous from one another: necessary sacrifices in autonomy must be made in order to allow interaction. We have found that these sacrifices fall along three dimensions: in the amount and type of information that agents exchange with one another; in the physical modifications that must be made to allow an agent to interact with others; and in the amount of control an agent has over its own actions and tasks during problem solving [Barker et al., 1992]. While completely autonomous AI systems are clearly desirable in many situations, a great many domains require agents which make some sacrifices of autonomy for the purposes of cooperative behavior. Indeed, the reason humans organize into groups is that more can be accomplished in unison than could be done autonomously. Therefore, the design of agents for a multi-agent system requires flexible autonomy, in that agents will sometimes be required to work completely autonomously, but will often be commanded or influenced by others to a varying degree. This flexibility must also be extendible to higher-level organizations: societies of agents may be autonomous, or dependent on other agents or societies.

In previous work, we have presented an architecture for intelligent agents that supports flexible autonomy [Evans et al., 1992]. This architecture is organized around constraint-based representation and control paradigms to capture and manipulate models of potential activities and the abilities of agents to contribute to these activities. The architecture's use of a constraint-based representation for the various aspects of an agent's relationship to others allows us to relax or constrain these relationships, producing flexible autonomy. Since an agent's autonomy is defined largely through these constraints, the concept of autonomy itself becomes hierarchical: agents can be subservient or completely autonomous; likewise, groups of agents may be self-sufficient, or may be dependent upon others. In addition to these high-level concepts, additional low-level support must be available for truly flexible autonomy: an assortment of methodologies for sharing and communicating information between agents, for example.

2. Simulation and Intelligent Systems Research

An architecture such as this is designed to handle complex problems in real-world environments. Intelligent systems

designed for this purpose should, by definition, be tested and evaluated in the real world. There is a growing realization in AI that systems that operate only under simulation often make unreasonable assumptions that simulated behaviour will scale appropriately to the real world. In many cases this assumption is strong enough that the simulated world in which the system operates bears little resemblance to the real world. Despite the overall suspicion that is currently associated with the use of simulated environments for intelligent systems, there are many logical reasons for using, and in many cases even preferring, simulated environments for testing intelligent agents and specific aspects of intelligence. A simulator provides a controlled environment for examining an intelligent agent, for example [Cohen et al., 1990], and can be used to isolate specific variables that may be difficult to control in the real world. Simulators can also be used to provide an identical interface between agent designs: in the real world, differences in sensing and perception may affect performance greatly, while a simulator can be used to examine an agent's results independently of these facilities. The environment an intelligent system is meant to perform in may also be inaccessible for testing, difficult to re-create, or dangerous to the system. In such cases, simulation has obvious advantages. Simulation may also be the only alternative in many cases: expensive robotic equipment may be beyond the means of many laboratories, and even when such resources are available, these components are often unreliable [Etzioni and Segal, 1992].

While the advantages of simulation to AI research are strong, one clearly cannot advocate that all intelligent agent research can or should be performed using simulation testbeds. The greatest disadvantage of simulation-based research is well-known: the ability to make simplifying assumptions can easily lead to making broader, invalid assumptions about the way the world works [Agre, 1988]. In the past, such assumptions have led to an overall lack of understanding of the physical world and how agents operate in it. However, relying on a completely-implemented peripheral system, as many recent systems attempt to do, forces much of the overall research effort to be expended on intricacies that are not the real focus. It also forces the reasoning in such systems to operate at a low level, and deal with simple environments. Systems that include complete sensory and effectory apparatus (e.g. [Agre, 1988]) can operate in simple reactive domains where few if any high-level, long-term decisions are required. This is a real difficulty: such models are becoming more attractive because of their simplicity, and there is a growing and as yet unjustified assumption that these same models will suffice for the implementation of higher-level behaviours.

Despite the obvious advantages of simulation systems for intelligent agents, little work has been done in this area until recently. While general quantitative and qualitative simulation systems (e.g. [Kuipers, 1986]) are widely available for reasoning about physical systems, these systems are geared toward providing very detailed simulations for the purposes of reasoning about physical systems. The type of simulation required for examining an intelligent agent design, however, is inherently different than that supported directly by such systems. Only in rare cases is the level of detail provided by sophisticated quantitative and qualitative

simulation systems required for examining an intelligent agent and the processes that constitute its behaviour: much of the reasoning an intelligent agent is involved with in common-sense domains is high-level, abstract reasoning. Simulators used as a basis for examining intelligent agent designs also have a more specific purpose than these general simulation systems, in that they provide sensory input for agents inhabiting the simulated environment, and manifest change caused by those agents.

Our own work in developing single-agent and multi-agent intelligent systems has often required the use of simulated domains for precisely the above reasons. From our own needs and expectations of simulation systems, we have identified the following requirements for a simulator for examining intelligent agent designs. Above all, such a system should be generic: it should be relatively easy to design a simulated domain for an intelligent agent to inhabit. A simulation system designed for these purposes should also be modular. Ideally, the simulation process managing the domain should be a completely separate computational process from any agent inhabiting that domain. This would result in a truly general simulator in that one could theoretically plug any agent or domain definition into it. A simulation system for intelligent agents should also provide support for multi-agent environments, and agents that consist of parallel processes. In addition to these characteristics, the semantics of a simulator must be clear and explicit. A lack of clear semantics has traditionally been a problem in simulation systems. Details of communication between an agent and simulator are often sketchy, as are assumptions made by the simulator about the internal operations of agents and other parts of the domain. Indeed, basic concepts such as what the simulator considers an "action" or an "event" to be are often left to speculation. The interface between the agent and the environment should also be as simple and low-level as possible, to reflect the real world (and rely on fewer assumptions about the domain and agents). Finally, a generic simulator should have the ability to control many aspects of the functioning of the world it simulates.

Several testbeds have been developed in recent years that claim to be useful for testing and evaluating intelligent agent designs. However, after an examination of several of these systems (PHOENIX [Cohen et al., 1989]; ARS MAGNA [Engelson and Bertani, 1992]; TILEWORLD [Pollack and Ringuette, 1990]; and MICE [Durfee and Montgomery, 1989]), all were found to be lacking in one or more of the basic criteria for simulation systems. Some (e.g. PHOENIX) were clearly meant for a specific domain and would seem to require major modifications to change the domain more than slightly, while others (e.g. TILEWORLD, ARS MAGNA) do not claim to be generic, but rather provide a single domain with a high level of detail, and attempt to provide enough variation within that domain to satisfy a wide variety of users. Other systems, while being generic enough to construct many different domains, are oriented toward representing specific aspects of problem-solving. MICE, for example, provides general facilities for constructing domains, but its agent interface facilities are specifically oriented toward dealing with coordination problems. Little in the way of semantics is explained by any of these systems, making it difficult to decide how appropriate the system is to a particular domain or

agent design. MICE is also one of only two systems that directly supports multiple agents. PHOENIX also provides a multi-agent environment, but that environment is specific and the autonomy of the agents it supports is fixed. MICE does allow agents to have a greater or lesser degree of dependence on one another, but has a poor semantics, and is designed for a specific type of problem-solving environment.

3. Gensim

These facts point to a significant need for a specific type of testbed for intelligent agent designs: one that is truly generic, in that it is suitable for use with a wide range of agent designs and domains, yet seriously attempts to address all of the other criteria mentioned above. These needs have led to the development of Gensim, a generic software testbed for intelligent agents.

Rather than simply providing a flexible domain, as is done in Ars Magna or Tileworld, Gensim provides flexibility in the simulation process itself. That is, rather than providing extensive domain parameters, we provide the parameters and functions necessary for the user to define their own domain and interfaces for agents. This requires greater initial effort on the part of the user, in order to define or modify a domain, but makes Gensim much more widely applicable than any of the simulation systems described above.

Figure 1. The high-level structure of Gensim

A high-level overview of Gensim is illustrated in Figure 1. The simulator itself manages the environment, a collection of objects describing the domain. This environment is objective: it is the universal set from which agents' perspectives are defined. The simulator also possesses a collection of procedural knowledge describing the actions that agents can perform on these objects, as well as the physical events that can occur to these objects outside of the influence of any agent.

A collection of agents is also defined, each of which has its own view of the environment based on its sensing ability and memory. As per Figure 1, agents exist as modifiable objects in the domain, as well as entities unto themselves. The reasoning abilities of an agent are implemented as a set of computational processes. Gensim performs its own timesharing of those processes. The simulation of agent operations is done by continuously cycling through each process of each agent. As agent processes run, they collectively perform actions and make requests for perceptual information from the simulator. After each process of each agent has been executed, the simulator updates the environment based on the actions of the agents (and other independent events), prepares sensory information for each of the agents based on this updated world, and cycles through the agents again.
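The cycle described above can be sketched roughly as follows. This is an illustrative Python sketch only; the class and method names (Agent, Simulator, run_cycle, and so on) are assumptions, not Gensim's actual interface, and the environment-update and sensing steps are placeholders.

```python
# Sketch of Gensim's simulate-update cycle. All names are illustrative
# assumptions; the real system timeshares agent processes itself.

class Agent:
    def __init__(self, name, processes):
        self.name = name
        self.processes = processes   # list of callables: process(agent)
        self.actions = []            # actions committed to during this cycle
        self.percepts = []           # sensory info prepared by the simulator

class Simulator:
    def __init__(self, agents):
        self.agents = agents
        self.environment = {}        # objective world state (placeholder)

    def run_cycle(self):
        # 1. Give each process of each agent a time-slice.
        for agent in self.agents:
            for process in agent.processes:
                process(agent)       # may append actions / sensory requests
        # 2. Update the environment from the agents' actions.
        for agent in self.agents:
            for action in agent.actions:
                self.apply(action)
            agent.actions.clear()
        # 3. Prepare sensory information from the updated world.
        for agent in self.agents:
            agent.percepts = self.sense_for(agent)

    def apply(self, action):
        self.environment[action] = True   # stands in for the domain physics

    def sense_for(self, agent):
        return list(self.environment)     # stands in for real perception
```

The key ordering, which the sketch preserves, is that all agent processes run before the simulator manifests any change: actions and sensory requests accumulate during the agents' time-slices and are resolved together against the updated world.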

Agents are viewed as "black boxes" by the simulator, in that Gensim neither knows nor cares how the agents arrive at their decisions for action. As indicated in Figure 1, some commonality in domain knowledge is required in order for agents to interact with the simulator. For example, the actions that the agent can select from (represented largely in declarative form) must be identifiable by the simulator, which then uses its own (largely procedural) knowledge of action to modify the environment appropriately. However, the emphasis is on making agents' knowledge distinct from that of the simulator, and the illustration in Figure 1 should not be interpreted as implying that agents physically share the simulator's knowledge. Rather, the simulator provides an objective description of the domain, consisting of all the objects in the environment (including the agents themselves), and causal knowledge of how those objects interact with one another. Each agent maintains its own (usually limited) perspective of this environment, much as the real world operates.

3.1. Actions and Events

The primary purpose of a simulator for intelligent agent designs is managing change in a virtual world. In Gensim, this world is modelled using an integrated frame-based knowledge representation system developed by the authors. Each physical object in the domain is described by a frame structure, and these structures are organized in a hierarchical fashion. The system also supports message-passing, procedural attachment to frames, a form of multiple inheritance, and the use of multiple frame hierarchies. The ability to manage multiple frame hierarchies is critical in a multi-agent simulator, since the simulator must keep track of the objective world and allow agents to maintain their own models of the world if they choose to do so. The use of multiple frame hierarchies allows each agent to make use of the same knowledge representation system used to manage the objective environment. Each agent can also make use of any other knowledge representation system it requires: using the frame system is advantageous in that it allows simpler interaction with the simulated world, but is not required. Change in the simulated world is managed through two facilities: agents perform actions that can alter the world; and events (which may or may not be independent of an agent's actions) may also occur. An event is the occurrence of change in the domain from an unspecified source. For example, a ball may hit a wall and bounce off; a toaster may pop; or the wind may knock something off the shelf. Note that none of these examples is directly attributable to any agent: for example, the ball may have been thrown by an agent, but from the point of view of providing an accurate physics, this no longer matters. An event is defined as a block of environmental changes that occur over a simulator interval. Events may propagate over time into an event series, which allows the simulator to represent continuous change over time. The simulator manages an event queue, each entry of which consists of a point in time at which an event is to occur, and

the name of a routine that will cause the desired changes in the domain to be manifested. This view of change as composed of sequences of events is similar to that of Lansky [1986]. However, Lansky's model concentrates on an agent's own internal reasoning about change, rather than the objective world-modelling provided by a simulator.
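An event queue of this form can be sketched minimally as below. The representation (a heap of (time, routine) pairs with a counter as tie-breaker) is an assumption for illustration, not Gensim's implementation.

```python
import heapq

# Sketch of a simulator event queue: each entry pairs a point in time
# with a routine that manifests the change at that time. Illustrative only.
class EventQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0   # tie-breaker for events scheduled at the same time

    def insert(self, time, routine):
        heapq.heappush(self._heap, (time, self._counter, routine))
        self._counter += 1

    def run_until(self, now):
        """Fire every event scheduled at or before `now`, in time order."""
        while self._heap and self._heap[0][0] <= now:
            _, _, routine = heapq.heappop(self._heap)
            routine()
```

A routine fired by `run_until` is free to call `insert` again, which is how a single event can propagate itself into the event series described above.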

Events are defined procedurally and are attached to the domain objects they affect (this implementation will be described in greater detail presently). A ball, for example, may have a TRAVEL-THROUGH-AIR event defined for it, which moves the ball through the air in a given direction at a particular speed. During the length of time the event runs, it may reduce the speed of the ball and check to see if it hits something in the environment. In either case, the TRAVEL-THROUGH-AIR event will insert a new event in the queue (to move the ball further along the next time the simulator runs, or to make the ball stop or bounce if it has hit something). Events may, of course, interfere with one another: in the above example, a ball might strike another ball and divert it from its path. Once again, this is part of the physics of the domain to be simulated, and routines are provided to dynamically insert, modify, reorder and delete event queue entries to allow the user to implement this physics.

Actions are closely related to events. Unlike events, change induced by an action has an explicit source: an action is performed by an agent with the intent of accomplishing some objective. Actions in Gensim consist of three components: an intention component, representing the agent's internal justifications for and expectations of the action; an agent-causal component, consisting of the immediate physical actions the agent performs in order to bring about its intentions; and a domain-causal component, consisting of the changes that occur later in time due to time delays or physical interactions with other objects in the environment. Thus a ball being thrown by an agent and breaking a window consists of the agent's intention to throw the ball (the reasoning and justification behind its decision), the actual arm motion that causes the ball to be thrown, and the course of the ball, culminating in its collision with the window. The agent knows only of its intention and the motions that it itself carries out; knowledge of what happens after the ball leaves the agent's hand is entirely dependent on the agent's perception (seeing what happened) and/or the accuracy of the agent's knowledge of the physics of the domain.

The simulator is concerned with manifesting the physical effects of an action on the environment. Thus, actions within Gensim are represented in two parts: the immediate changes that would have taken place during the cycle in which the agent performed the action, and a series of future events the action sets in motion. These are inserted into the event queue. Thus each action can have a series of immediate effects, and a collection of future events that occur. An action can cause any number of future events to occur: they are simply inserted into the event queue. However, each one of these events must be independent of one another. For example, an action like THROW, when performed by an agent, must cause the ball to leave the agent's hand in the current time interval and have some velocity in a given direction. The implementation of the action can then set up a TRAVEL-THROUGH-AIR event, as described above. This event can in turn propagate itself, and the resulting sequence of events implements the desired

behaviour. An example of such a sequence of events is shown in Figure 2. This example illustrates the event sequence that might occur when an agent throws a ball. The ball travels-through-air during a number of intervals, hits something and bounces, and will eventually come to a stop. Only one of this series of events will be in the event queue at a time, since each one spawns the next. This sequence is also non-deterministic: as the future unfolds, existing scheduled events can be modified or cancelled by other actions and events through the use of the event facilities described earlier.
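The two-part representation of an action (immediate agent-causal effects plus a self-propagating domain-causal event) might look roughly like this. Everything here is an illustrative assumption: a plain list stands in for the event queue, and the drag and velocity values are arbitrary.

```python
# Sketch: an action = immediate effects + the future events it sets in
# motion. A simple list stands in for the simulator's event queue.
event_queue = []   # entries: (time, routine)

def travel_through_air(ball, time):
    # Domain-causal change: the ball moves and slows each interval.
    ball["position"] += ball["velocity"]
    ball["velocity"] = max(0, ball["velocity"] - 1)   # crude drag
    if ball["velocity"] > 0:
        # The event propagates itself into the next interval.
        event_queue.append((time + 1,
                            lambda: travel_through_air(ball, time + 1)))

def throw(agent, ball, time):
    # Immediate (agent-causal) effects: the ball leaves the agent's hand
    # with some velocity in the current interval.
    agent["holding"].remove(ball["name"])
    ball["velocity"] = 3
    # Future (domain-causal) effects: a self-propagating travel event.
    event_queue.append((time + 1,
                        lambda: travel_through_air(ball, time + 1)))
```

Only one travel event is ever pending at a time, matching the paper's observation that each event in the series spawns the next.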

Figure 2. An example of an event sequence (Throw, then Travel-through-air repeated over a number of intervals, then Hit, then further Travel-through-air events)

The semantics of actions in Gensim is very different from that used in previous simulation systems. In the past, simulators have often been described as "carrying out" or "executing" the actions of an agent: such phrases invite an inappropriate comparison between the agent-simulator relationship and the relationship between a classical planner and its executor. The real world bears almost no comparison to such an executor, and neither should any simulator intending to model it. The real world is not a servant that carries out the commands of a disembodied agent. Rather, the agent physically participates in an ongoing interaction with the rest of the objects that constitute the world.

In keeping with this view, Gensim does not allow agents to "instruct" the simulator in any way; rather, agents commit to and carry out actions during the time-slices in which they are active. When an agent commits to an action such as "Throw the ball", the action is not simply carried out by the simulator, with some result returned to the agent. Rather, the agent is viewed to have carried out the agent-causal portion of the action during the time-slice in which the agent's decision-making processes were active. The simulator, having been given knowledge that the agent has performed this action, in turn alters the affected parts of the environment during the time interval in which it is active. That is, the simulator changes the environment based on the agent-causal portion of the action, and continues the changes indicated by the domain-causal portion of the action over time. This distributed view of action, placing control of the agent-causal portion within the agent and the domain-causal portion in the simulator, emphasizes the dual nature of action: an agent will always have some responsibility, with the physics of the environment providing the rest. Agents are viewed as active participants in ongoing interaction with the world, rather than as disembodied decision-making processes.

3.2. Perception

In addition to performing actions, agents also interact with the simulated world through perception. Perception is an interesting problem in a simulator such as Gensim, since in the real world information is not simply given to an agent arbitrarily. Rather, an agent's knowledge (its focus) directs the agent to explore the world, looking for given pieces of information. This exploration directs the agent to specific information (objects) in the environment, which then modifies the anticipations the agent has for future sensations [Neisser, 1976]. An agent must be allowed to specify its interest in given aspects of the environment, but must also be given access to information independent of those interests. Perception must exist to confirm the agent's expectations of the world, but also must provide the agent with new information in order to form the expectations of the world that guide its sensory abilities.

Perception in Gensim is defined at the object level: an agent senses a combination of complete objects and specific sensory aspects of those objects (e.g. a ball, or the fact that it is red or round), rather than examining the individual edges and features that make up an image of the object itself. A Gensim agent makes a sensory request for some particular set of information, and this request is recorded by the simulator. After all agent processes have been run on a particular cycle, the simulator updates the environment according to the actions and events that have occurred during the interval, and then prepares sensory information based on the agents' requests.

Two possible sensory requests are implemented in Gensim (with provisions for implementing additional sensory abilities). An agent is allowed to explicitly gather information about a given object through an explicitly defined LOOK request. An agent can also SCAN in a given direction (recall that directions and locations are part of the definition of a domain) to get a general overview of objects in its vicinity. Part of the definition of a Gensim domain is the definition of domain-specific aspects of these operations. Like actions, the knowledge required to fulfill sensory requests is defined in procedural form as part of a domain definition and attached to the simulator's knowledge of the agents themselves. The ability of an agent to look and scan can thus differ from agent to agent. For example, the procedural implementation of SCAN for one agent may result in descriptions of a given number of objects of a certain size (i.e. the agent "notices" large objects first). A differing implementation may have the agent see all objects within a given distance. Each agent has a finite bandwidth, implying that it can perceive only a certain number of objects at a time.
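The two SCAN policies just described, each truncated by a finite bandwidth, can be sketched as below. The object representation (dicts with name, size, and location), the Manhattan distance metric, and the default cutoff are all illustrative assumptions.

```python
# Sketch of agent-specific SCAN procedures with a finite bandwidth.
# Objects and agents are dicts here purely for illustration.

def scan_by_size(objects, agent, bandwidth):
    """One agent 'notices' large objects first, up to its bandwidth."""
    return sorted(objects, key=lambda o: -o["size"])[:bandwidth]

def scan_by_distance(objects, agent, bandwidth, max_dist=5):
    """Another agent sees all objects within a distance, up to its bandwidth."""
    def dist(o):
        (x1, y1), (x2, y2) = o["location"], agent["location"]
        return abs(x1 - x2) + abs(y1 - y2)   # assumed Manhattan metric
    return [o for o in objects if dist(o) <= max_dist][:bandwidth]
```

Swapping one procedure for the other changes what an agent perceives without changing the agent itself, which is the point of attaching these procedures to the simulator's knowledge of each agent.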

Gensim also recognizes that perception often involves integrating information from several senses [Neisser, 1976]. This is addressed in Gensim through two additional components to perception. First, an event may cause some noise or other obvious disturbance in the environment. This can be represented using a SIGNAL command, which is passed to an agent directly at the beginning of its time-slice. Like other types of perception, an explicit bandwidth may be used. It is also possible, when updating the environment based on the actions of an agent, to inform the agent that an error has occurred, rather than forcing the agent to visually sense the error.

3.3. Agent-Simulator Communication

One important aspect of Gensim has been omitted in the previous section, and that is that perception involves communication: the agent must somehow describe the object(s) it is interested in to the simulator. If an agent picks up a ball, for example, it must somehow inform the simulator which ball the action has been performed with. Communication is one aspect of the relationship between an agent and the real world that can never be completely approximated by a simulator, simply because no communication takes place when an agent interacts with the real world. When an agent wants to pick up an object, for example, it just does so: it doesn't have to communicate this fact to the world. The question thus becomes one of providing these communication abilities in a way that is as unobtrusive as possible to the rest of the architecture of an agent.

(DESC (equal location (1 1))
      (equal color (DESC (equal class ball)
                         (equal location self))))
= "The object at location 1,1 that is the same color as the ball sitting where I am"

Figure 3. An example of an object descriptor

Agents in Gensim may make use of two methods of referring to domain objects for action and perception. The primary method is through the use of an object descriptor, which provides an indexical-functional or deictic [Agre, 1988] method of referral. An object descriptor is a data structure that describes an object by its relationship to the agent itself or objects the agent knows about. An example of such a structure is shown in Figure 3. Each object descriptor consists of a header identifying the structure, and a conjunction of clauses. Each clause states some condition that a domain object must satisfy to match the description. When making use of such comparisons, descriptions may be recursive: in the case of the example in Figure 3, we compare the color of the object to that of another object referred to from the agent's perspective (in this case, the ball that shares the same location as the agent itself). An agent may make use of such a structure in an action (describing to the simulator which ball it wishes to pick up, for example), or as a means of describing a focus in perceptual requests.

In the interests of computational efficiency, only one restriction is made on the choice of attributes in an object description. In order to make the initial list of objects to which the description could apply small, Gensim makes the assumption that one of the clauses will be an attribute under which domain objects are indexed. This is by default a class, and thus a clause describing the class of the desired object is required. Locations may also be used as an index on objects. As the example in Figure 3 implies, the simulator and agent must have certain concepts in common in order to be able to communicate with one another. In the above example, both must agree on the concept of "red", for example, and on what a "weight" attribute describes. This implementation of object descriptors also requires that the simulator and agents share class names for objects, as well as the same representation of locations.
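Matching such a descriptor against the domain, including the recursive case in Figure 3 and the no-match/one-match/many-match outcomes discussed below, can be sketched as follows. The encoding is an assumption: clauses are plain (attribute, value) pairs standing in for the paper's (equal attribute value) form, and the `on_many` parameter stands in for Gensim's random-choice-or-error parameter.

```python
import random

# Sketch of recursive object-descriptor matching. A descriptor is a tuple
# ("DESC", (attr, value), ...); a value may itself be a descriptor, in
# which case it resolves to that object's attribute, and the token "self"
# resolves against the agent's own perspective. All names are illustrative.

def resolve(value, attr, objects, perspective):
    if isinstance(value, tuple) and value and value[0] == "DESC":
        inner = match(value, objects, perspective, on_many="error")
        return inner[attr]            # e.g. the color of the matched ball
    if value == "self":
        return perspective[attr]      # e.g. the agent's own location
    return value

def match(desc, objects, perspective, on_many="random"):
    clauses = desc[1:]
    candidates = [o for o in objects
                  if all(o.get(attr) == resolve(val, attr, objects, perspective)
                         for attr, val in clauses)]
    if not candidates:                # no object: treated as an error
        raise LookupError("descriptor matches no object")
    if len(candidates) == 1:          # a single object: the norm
        return candidates[0]
    if on_many == "random":           # many objects: pick one at random...
        return random.choice(candidates)
    raise LookupError("ambiguous descriptor")   # ...or signal an error
```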

In addition to object descriptors, Gensim provides object references, which allow an agent to reference its own internal symbol for an object when informing the simulator of actions or sensory requests. When an agent refers to an object via an object reference, a description is created by the simulator at translation time (that is, when it comes time to update the world based on the action that contains the reference, or to fulfill a sensory request on the same basis). This is done by allowing the simulator access to the agent's knowledge base. Once again, this form of communication violates the basic tenet of complete separation of agent and simulator knowledge, and also limits autonomy, but is required for communication.

When the simulator encounters one of these descriptors or references, it attempts to match them to a domain object. There are three possible results of this matching process: the descriptor or reference may match no objects, a single object, or many objects. Every sensory descriptor or reference is assumed to refer to a single domain object, so the first category is considered an error (the agent may or may not be informed of this, as the user chooses), and the second category is the norm. For the third category, Gensim provides a parameter that allows actions to choose objects randomly or signal an error upon ambiguous references to multiple objects. When the object for which an agent is requesting sensory information is deciphered, the simulator returns a description of the object consisting of all the attributes recorded for that object that are visually perceivable.

4. Defining a Gensim Domain

Gensim provides a rich set of facilities for defining domains and agents to inhabit them. There are essentially two aspects to defining any domain in Gensim. First, the objects that exist in the domain must be defined, and the simulator's universal knowledge of how those objects may be manipulated (the objective physics of the domain) must be described. Once an objective domain is available, each primary agent and its associated perspective on the simulator's objective knowledge must also be defined. Gensim provides definition facilities (functions and macros) to allow users to easily describe aspects of the domain where names and function will vary (e.g. agents, actions, domain objects). Space limitations prevent discussing all of them here, but we will focus on two facilities that are representative of many others in the Gensim environment: those for defining agents and the actions they can perform.

Agents are defined through an explicit defagent facility. This facility defines the names of the processes an agent consists of, and allows overriding of the simulation parameters that control how long each process runs. It creates a frame partition to hold the agent's knowledge (should the agent wish to make use of the built-in knowledge representation mechanisms), and defines a frame in the simulator's knowledge base to hold information about the physical nature of the agent. The second aspect of agent definition, the internal knowledge and processing of an agent, is largely left up to the design of the user.
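The defagent facility just described, together with the defaction facility covered below, might be caricatured in Python as follows. Gensim's actual facilities are Lisp macros, so the names, signatures, and data structures here are illustrative assumptions only:

```python
# A stand-in for the simulator's registries of agents and actions.
SIMULATOR = {"agents": {}, "actions": {}}

def defagent(name, processes, **timing):
    """Register an agent: record its process names, any overrides of the
    per-process timing parameters, a partition for the agent's own
    knowledge, and a simulator-side frame for its physical nature."""
    SIMULATOR["agents"][name] = {
        "processes": processes,
        "timing": timing,
        "knowledge": {},                    # the agent's own frame partition
        "physical": {"possessions": []},    # simulator's view of the body
    }

def defaction(action, agent_names, definition):
    """Associate a procedural definition with an action name and link the
    action to particular agents known to the simulator."""
    SIMULATOR["actions"][action] = definition
    for name in agent_names:
        SIMULATOR["agents"][name].setdefault("can-do", []).append(action)

def throw(agent, obj, events):
    """THROW in the style of Figure 4: remove the object from the agent's
    possession list and set up a travel-through-air event for it."""
    agent["physical"]["possessions"].remove(obj)
    events.append(("travel-through-air", obj))

defagent("BILL", processes=["perceive", "decide", "act"], decide_ticks=3)
defaction("THROW", ["BILL"], throw)
```

This is only the simulator's side of the definition; as the text notes, the agent's own view of its abilities must be represented separately, in whatever form the agent designer chooses.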

[Figure 4. Actions attached to a particular agent: the instance BILL of class PRIMARY-AGENT has two attached actions. Throw: remove the object from the agent's possession list, and set up a travel-through-air event for it. Move-self: update agent orientation to place self at a new location.]

Actions that agents can perform are defined using an explicit DEFACTION facility. The defaction macro names an action (e.g. THROW), associates a procedural definition with the action, and links the action to a particular agent or group of agents known to the simulator. An abstract view of this structure is shown in Figure 4. Agent BILL has two actions which it can perform: throwing an object and moving itself. These are carried out through message passing: the THROW action in this case sends a throw message to an object, and the object can then proceed as dictated by the physics of the world itself. Once again, this is the simulator's knowledge of the agent and what it can accomplish. Such actions must also be defined within the agent's own knowledge base, in any manner the user chooses, in order to provide the agent with a suitable view of its own abilities.

In addition to the agents described thus far (which we term primary agents), Gensim also supports two additional classes of agents. Controlled agents are deterministic agents presumed to operate under some degree of control by a primary agent. In a kitchen domain, for example, there are objects such as toasters and ovens, which are by no means intelligent, but still have effects on the world (causing bread to become brown, food to be cooked, etc.). Such agents are defined in a fashion similar to primary agents, with their own object-oriented actions and events. Random agents, which can have random effects on the world, are also supported.

5. Discussion

Gensim is a generic software testbed for intelligent agents. It is modular, allowing the user to easily substitute agents or modify aspects of a domain; it supports multiple agents and multiple processes; and it has a straightforward object-level interface for agents. Perhaps most importantly, however, the semantics of Gensim are clear and explicit, unlike those of previous simulators: semantic aspects of agent timing, agent actions and perception, and the relationship between the agent and simulator have all been explicitly described. This allows potential users to gauge the fit of their domain to Gensim almost immediately, rather than attempting to implement the domain and only later learning of semantic details that complicate the implementation.

Since the focus of Gensim is on providing a set of facilities for constructing and simulating any domain, rather than on providing a single parameterized domain, the system is necessarily more flexible and complex than previous simulators. Gensim gives users the power to flexibly implement the domain of their choice, and is the only one of the simulation systems discussed that can support the use of a single agent architecture in widely varying domains.

Gensim also provides a foundation for the support of flexible autonomy in agent architectures. In its role as a testbed for such architectures, it attempts to address the integration of decision-making and physical layers called for in [Zeigler and Rozenblit, 1990]. Gensim's facilities allow variations in autonomy along three lines: knowledge representation, concept sharing, and communication. Gensim's knowledge definition facilities allow agents to directly refer to and reason about the exact objects maintained by the simulator for objective purposes (severely limiting autonomy); allow agents to share their own representations of domain objects with one another (moderate autonomy); and also support complete, independent world models for each agent (high autonomy). Domains can also be organized so that agents need only have a few concepts in common, such as a common reference for the location of objects (high autonomy), or must have common definitions of a large number of concepts (limiting autonomy). Finally, several communication methods for identifying objects referred to in agents' actions or sensory requests are available. An agent can simply specify the symbol or name the simulator uses for an object when referring to it in an action or sensory request (low autonomy); can refer to its own name or symbol for the object (moderate autonomy); or can specify objects in an indexical-functional or deictic [Agre, 1988] fashion, allowing an object to be described by its relationship to the agent itself or to other objects the agent knows about (high autonomy).
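The three communication methods can be illustrated with a sketch of reference resolution at the three autonomy levels. The structures and names below are invented for illustration, not drawn from Gensim's implementation; the deictic case resolves "the nearest block to me", in the indexical-functional spirit of [Agre, 1988]:

```python
import math

# Simulator-side objects and an agent's private knowledge (illustrative).
SIM_OBJECTS = {
    "BLOCK-7": {"class": "block", "position": (2.0, 0.0)},
    "BLOCK-9": {"class": "block", "position": (5.0, 5.0)},
}
AGENT_KB = {"my-block": "BLOCK-9"}    # agent's symbol -> simulator's symbol
AGENT_POSITION = (0.0, 0.0)

def resolve(reference):
    """Resolve an object reference given as a (kind, value) pair."""
    kind, value = reference
    if kind == "simulator-symbol":    # low autonomy: shared object names
        return value
    if kind == "agent-symbol":        # moderate: simulator reads agent's KB
        return AGENT_KB[value]
    if kind == "deictic":             # high: e.g. the nearest object of a
        # given class, described relative to the agent itself
        return min((n for n, o in SIM_OBJECTS.items() if o["class"] == value),
                   key=lambda n: math.dist(SIM_OBJECTS[n]["position"],
                                           AGENT_POSITION))
    raise ValueError(kind)
```

With the data above, a deictic request for a "block" resolves to BLOCK-7 (the nearer of the two blocks), while an agent-symbol request for "my-block" requires the simulator to consult the agent's knowledge base.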

This research is significant to the field of Artificial Intelligence in that improved environments for examining intelligent agents facilitate more direct comparison of intelligent agent designs. It is also significant to simulation and autonomy, in that Gensim provides adaptable, high-level simulation facilities which may in some cases be more applicable than more complex qualitative or quantitative simulation systems. Gensim also provides direct low-level support for agents of varying autonomy in multi-agent systems. The dimensions of variation in autonomy described above allow Gensim to directly support differences in the autonomy of agent perception, models of action, and models of the world around the agent. This relates directly to Zeigler's [1990] three "levels of achievement" for autonomy. We are currently using Gensim for the low-level support of flexible autonomy in the architecture described earlier [Evans et al., 1992], as well as for research on the design of autonomous agents that can deal with complex domains [Anderson, 1993].

References

[Agre, 1988] Agre, Philip E., The Dynamic Structure of Everyday Life, Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 1988. 282 pp.

[Anderson, 1993] Anderson, John E., Constraint-Directed Improvisation for Everyday Activities, forthcoming Ph.D. dissertation, Department of Computer Science, University of Manitoba.

[Barker et al., 1992] Barker, Ken, Evans, Mark, and Anderson, John, "Quantification of Autonomy in Multi-Agent Systems", in Proceedings of the AAAI Cooperation Among Heterogeneous Intelligent Systems Workshop, San Jose, CA, July, 1992.

[Cohen et al., 1989] Cohen, Paul R., Michael L. Greenberg, David M. Hart, and Adele E. Howe, "Trial by Fire: Understanding the Design Requirements in Complex Environments", AI Magazine 10 (3), 1989.

[Durfee and Montgomery, 1989] Durfee, Edmund H., and Thomas A. Montgomery, "MICE: A Flexible Testbed for Intelligent Coordination Experiments", in Proceedings of the Ninth DAI Workshop, Eastsound, WA, September, 1989, pp. 25-40.

[Engelson and Bertani, 1992] Engelson, Sean P., and Niklas Bertani, Ars Magna: The Abstract Robot Simulator Manual, Department of Computer Science, Yale University, October 1992. 73 pp.

[Etzioni and Segal, 1992] Etzioni, Oren, and Richard Segal, "Softbots as Testbeds for Machine Learning", in Proceedings of the Machine Learning Workshop at AI/GI/VI '92, University of British Columbia, May, 1992, pp. v1-v8.

[Evans et al., 1992] Evans, Mark, John Anderson, and Geoff Crysdale, "Achieving Flexible Autonomy in Multi-Agent Systems using Constraints", Applied Artificial Intelligence 6 (1992), pp. 103-126.

[Kuipers, 1986] Kuipers, B., "Qualitative Simulation", Artificial Intelligence 29, 1986, pp. 289-338.

[Lansky, 1986] Lansky, Amy, "A Representation of Parallel Activity", in Reasoning about Actions and Plans, Timberline, OR, 1986, pp. 123-159.

[Pollack and Ringuette, 1990] Pollack, Martha E., and Marc Ringuette, "Introducing the Tileworld: Experimentally Evaluating Agent Architectures", in Proceedings of the Eighth AAAI, Boston, MA, 1990, pp. 183-189.

[Zeigler, 1990] Zeigler, B., "High Autonomy Systems: Concepts and Models", in [Zeigler and Rozenblit, 1990], pp. 2-7.

[Zeigler and Rozenblit, 1990] Zeigler, Bernard, and Jerzy Rozenblit (Eds.), AI, Simulation and Planning in High Autonomy Systems (Los Alamitos, CA: IEEE Computer Society Press), 1990. 222 pp.
