Chathra Hendahewa has been a graduate student at Rutgers, The State University of New Jersey, since fall 2008 and is currently in her second year of studies. She is working towards her doctorate in Computer Science, and her major research interests are pattern classification with applications to financial systems, machine learning, and cognitive science. She was the first International Fulbright Science & Technology Award scholar from Sri Lanka. Chathra graduated from the Faculty of Information Technology, University of Moratuwa, in 2007. She is also a passed finalist and multiple gold medalist of CIMA (Chartered Institute of Management Accountants, UK). She has more than a year of professional experience as a Business Systems Analyst at IFS R&D and completed an internship developing modules for the ‘Sahana’ Disaster Management System. In her free time she enjoys reading biographies, mysteries, and science fiction, and listening to music.

Types of Intelligent Agents

11/25/2009 3:28 am By Chathra Hendahewa | Articles: 11

Last month, we looked at how environments relate to agents, agent functions, and agent programs. Moving forward in the area of intelligent agents, today's article explains the different agent types and how they differ from one another. Most intelligent agents found today can be grouped into five main types. We will start with the simplest type and work up to the most sophisticated, which is ultimately the most desirable kind of agent in real-life contexts.


1. Simple reflex agents

This is the simplest agent architecture possible. The underlying concept is straightforward and involves little intelligence. For each condition that the agent can observe through its sensors, based on changes in the environment in which it operates, the agent program defines a specific action (or actions). So for each observation received from its sensors, the agent checks its condition-action rules, finds the matching condition, and then performs the corresponding action using its actuators. This is useful only when the environment is fully observable and the agent program contains a condition-action rule for every possible observation, which is generally impossible in real-world scenarios and limits this type to toy, simulation-based problems. Figure 1 depicts the underlying concept of a simple reflex agent.

Figure 1 – Simple reflex agent

In the above case, the agent program contains a look-up table constructed before the agent becomes operational in the specific environment. The look-up table should map every possible percept sequence to its respective action. (If you need to refresh your memory about the concept of a look-up table, please refer to last month's article, whose final section describes it in detail.) Based on the input the agent receives via its sensors (about the current state of the environment), the agent accesses this look-up table, retrieves the action mapped to that percept sequence, and instructs its actuators to perform it. This process is not very effective when the environment changes while the agent is acting, because the agent is acting on a percept sequence acquired before the change, so the performed action might no longer suit the environment's state.
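The condition-action idea can be sketched in a few lines of Python. This is a minimal illustration, assuming a hypothetical two-square vacuum world (the article does not specify an environment); the percept, location names, and actions here are invented for the example.

```python
# A simple reflex agent for a hypothetical two-square vacuum world.
# The percept is a (location, status) pair; the rules below play the
# role of the condition-action look-up table. Note there is no state
# and no percept history: the current percept alone selects the action.

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, status = percept
    if status == "Dirty":       # rule: dirty square -> clean it
        return "Suck"
    elif location == "A":       # rule: square A is clean -> move right
        return "Right"
    else:                       # rule: square B is clean -> move left
        return "Left"
```

For example, `simple_reflex_agent(("A", "Dirty"))` returns `"Suck"`. Because the agent consults only the current percept, it would loop forever between the two squares once both are clean, which is exactly the kind of limitation described above.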


2. Model-based reflex agents

This is an improved version of the first agent type, with the capability of choosing an action based on how the environment evolves from its current state. As in all agent types, model-based reflex agents acquire percepts about the environment through their sensors. These percepts give the agent an understanding of what the environment is like at that moment, limited by what its sensors can detect. The agent then updates an internal state built from the percept history, which lets it infer some unobserved facts about the current state of the environment. Updating the internal state requires knowledge of how the world (environment) evolves independently of the agent's actions, and of how the agent's actions affect the environment. This encoded knowledge of how the environment evolves is known as a model of the world, which is how this agent type got the name model-based.

Figure 2 – Model-based reflex agent

The above diagram shows the architecture of a model-based reflex agent. Once the agent receives the current percept through its sensors, the previously stored internal state, combined with the new percept, determines the revised description of the current state. The agent function therefore updates its internal state every time it receives a new percept. The updated state is then matched against the condition-action rules to determine which action should be performed, and the actuators are instructed accordingly.
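Extending the earlier vacuum-world sketch, a model-based version might look like the following. The internal state and the "model" rule (that sucking leaves a square clean) are illustrative assumptions, not details from the article.

```python
# A model-based reflex agent for the same hypothetical two-square world.
# Unlike the simple reflex agent, it keeps an internal state that is
# updated from each percept plus a model of how its own actions change
# the world ("sucking makes the current square clean").

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal state: the agent's best guess about each square.
        self.state = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        """Revise the internal state from the new percept."""
        location, status = percept
        self.state[location] = status
        return location

    def act(self, percept):
        location = self.update_state(percept)
        if self.state[location] == "Dirty":
            self.state[location] = "Clean"  # model: our action cleans the square
            return "Suck"
        return "Right" if location == "A" else "Left"
```

The key difference from the simple reflex agent is visible in `self.state`: even facts the sensors cannot currently observe (the status of the other square) persist between percepts.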


3. Goal-based agents

This agent is designed to perform actions in order to reach a certain goal. In any agent, the main criterion is to achieve a certain objective function, which in layman's terms can be referred to as a goal. In this agent type, goal information is defined so that the agent can determine which action or actions, out of the available set, should be performed to reach the goal effectively. For example, if we are designing an automated taxi driver, the passenger's destination (fed in as the goal to reach) gives the agent useful guidance in selecting the roads that lead there. The difference from the first two agent types is that a goal-based agent does not have a hard-wired condition-action rule set; its actions are based purely on its internal state and the defined goals. This can sometimes reduce effectiveness, when the agent does not explicitly know what to do at a given time, but it is more flexible: based on the knowledge it gathers about state changes in the environment, it can modify its actions to reach the goal.

Figure 3 – Goal-based agent
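A tiny sketch of the taxi example: given a road map and a destination, a goal-based agent searches for a sequence of actions that reaches the goal rather than looking up a fixed rule. The road graph and town names below are invented for illustration; breadth-first search is used here as one simple way to plan towards a goal.

```python
from collections import deque

# Goal-based action selection: search a hypothetical road map for a
# route from the current location to the goal (the passenger's
# destination), using breadth-first search.

def goal_based_route(roads, start, goal):
    """Return a list of towns from start to goal, or None if unreachable."""
    frontier = deque([[start]])   # paths still to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:      # goal test: have we reached the destination?
            return path
        for nxt in roads.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

With `roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}`, the call `goal_based_route(roads, "A", "D")` returns `["A", "B", "D"]`. Changing the goal changes the behavior with no change to any rules, which is exactly the flexibility described above.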


4. Utility-based agents

Goals by themselves are not adequate to produce effective behavior in agents. Consider the automated taxi driver again: just reaching the destination would not be enough, since passengers also care about safety, reaching the destination quickly, cost effectiveness, and so on. To combine goals with these desired features, a concept called a utility function is used. By comparing different states of the world, a utility value is assigned to each state: the utility function maps a state (or a sequence of states) to a numeric measure of satisfaction. The ultimate objective of this agent type is therefore to maximize the utility derived from the state of the world. The following diagram depicts the architecture of a utility-based agent.

Figure 4 – Utility-based agent
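A utility function for the taxi example could be sketched as a weighted trade-off between safety, time, and cost. The candidate routes, attribute values, and weights below are all invented for illustration; the only point is that the agent picks the state with the highest numeric utility rather than merely any state that satisfies the goal.

```python
# Utility-based choice for the hypothetical taxi: each candidate route
# already reaches the destination (the goal), so the agent ranks them
# by a utility function combining safety, travel time, and cost.

def utility(route):
    """Weighted score: safety is rewarded; time and cost are penalised."""
    return 0.5 * route["safety"] - 0.3 * route["time"] - 0.2 * route["cost"]

def choose_route(routes):
    """Pick the route that maximises utility."""
    return max(routes, key=utility)
```

For instance, between a fast highway route and a cheap but slow back road, the agent selects whichever scores higher under these weights; changing the weights changes the agent's preferences without touching its goals.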


5. Learning agents

In the study of AI, we are mostly interested in finding ways to mimic human behavior and intelligence in agents. Learning is one of the major foundations of human intelligence, so it is important to see how we can incorporate learning into intelligent agents to achieve tasks more effectively. A learning agent has four conceptual components. The first is the learning element, which is responsible for making improvements to the agent's existing knowledge. The second is the critic, which gives feedback to the learning element based on a defined performance standard: the percept sequence received from the agent's sensors is passed to the critic, which provides feedback based on that sequence and the performance standard to be achieved. The third is the performance element, which is responsible for selecting the external actions to be performed by the agent's actuators; it interacts with the learning element and ultimately determines what action to perform based on the evolved knowledge. The fourth component is the problem generator, whose job is to propose actions that lead to new and informative experiences. This helps the agent explore beyond its usual horizon and try out new actions from which it can learn. In the long run, these components acting together come to know which actions are better and which are worse in a given environment, eventually leading to improved overall performance. The typical structure of a learning agent is shown in Figure 5.

Figure 5 – Learning agent
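The four components can be sketched with a simple reward-driven agent. This is only one possible, heavily simplified reading of the architecture (a per-action value table updated from scalar rewards); the actions, learning rate, and exploration rate are all illustrative assumptions.

```python
import random

# A toy learning agent: the learning element is a table of action
# values, the critic turns a reward into an update of that table, the
# performance element exploits the best-known action, and the problem
# generator occasionally proposes a random action to explore.

class LearningAgent:
    def __init__(self, actions, explore_rate=0.1):
        self.values = {a: 0.0 for a in actions}  # learning element's knowledge
        self.explore_rate = explore_rate          # how often to explore

    def performance_element(self):
        """Pick the best-known action from current knowledge."""
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        """Propose a random action to gain new, informative experience."""
        return random.choice(list(self.values))

    def choose_action(self):
        if random.random() < self.explore_rate:
            return self.problem_generator()
        return self.performance_element()

    def critic(self, action, reward, learning_rate=0.5):
        """Feedback against the performance standard updates the knowledge."""
        self.values[action] += learning_rate * (reward - self.values[action])
```

Repeatedly calling `choose_action()` and feeding the observed reward back through `critic()` gradually shifts the value table, so over time the performance element favors the actions that have worked best, which is the long-run improvement described above.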


Wrapping up

In today’s article we discussed the various types of agents that can be built to achieve a certain task. Starting with simple reflex agents, we moved on to the more complex ideas and structures of learning agents. I hope you have gained some understanding of the different agent types; it is worth knowing that the most complex structure is not always required. In some simple domains, the less complex agent types perform well, although in real-life scenarios the emphasis should be on learning agents. In the next article we will talk a little more about agents and then move on to problem solving through searching, one of the key topics in AI. I hope you all enjoyed reading this article series on AI and have developed an interest in the field throughout 2009. We hope you will look forward to the continuation of this series in the coming year as well. Best wishes for a Merry Christmas and a Happy New Year!



Reference: Artificial Intelligence: A Modern Approach, Second Edition, Stuart Russell & Peter Norvig
