Starting from today, this article series on Artificial Intelligence (AI) will focus on 'intelligent agents' and search techniques, two very useful and interesting sub-areas of AI. Let us talk about intelligent agents first. To begin with, we should understand why agents are important in AI-based systems. The term 'agent' is usually used to describe a person who undertakes tasks or actions in order to fulfill certain objectives. For example, in day-to-day life, a 'real estate agent' is a person who acts as an intermediary between sellers and buyers of real estate to carry out real estate transactions. Likewise, you may have heard of a 'travel agent', who takes care of the booking, handling and payment management of travel on behalf of passengers, and who may be attached to a travel agency or working freelance. There are many more agents in the real world who perform actions in their respective fields of work to achieve certain objectives. All of these real-world agents have paved the way for defining an entity called the 'intelligent agent' in AI. In AI, we try to build computer systems that can exhibit intelligent or rational behavior to perform tasks with which we, as humans, need some assistance. You can refer to the first two articles in this series, which discuss rationality, intelligence and other basic terminology, to refresh your memory if required.
In AI, an 'intelligent agent' is an entity that perceives its environment through sensors and acts upon that environment through actuators. For a human being, the sensors are the eyes, ears and other sensory organs, while the actuators are the hands, legs, mouth and other body parts with which we perform tasks. The environment is the situation or area in which the agent acts. For example, the environment of a travel agent is the travel agency, where information about travel and about passengers is available. We can therefore see that the function of, and expectations from, an AI-based intelligent agent closely mirror what an agent does in the real world.
Intelligent Agents in detail
In more detailed terms, an 'agent' is an entity that performs responsibilities and actions to achieve a certain goal on behalf of another, acting upon the percepts received from its sensors about the environment, while learning from experience with minimal human intervention.
In AI there are certain terms one has to learn when dealing with agents, described below.
- Environment – The surroundings in which the agent operates
- Sensors – The means by which the agent receives input and observes the changes happening in the environment
- Percepts – The input that an agent receives from its different sensors about the environment
- Actuators – The means by which the agent performs the required actions based on the given percepts. For a movement action the actuators could be wheels or legs (in the case of a humanoid robot); alternatively, the actuator could be a voice, where a robot or computer system provides feedback in speech.
- Actions – The particular actions which the agent does in order to fulfill the given goal with the use of its actuators
- Performance Measure – A measurement of how successful the agent's actions are, based on how far they go toward fulfilling the specified goal. In more scientific terms, it is an objective criterion for the success of an agent's behavior. For example, for a computer-based (AI-based) travel agent, the performance measure could be to maximize the profit generated from the sale of travel and bookings while meeting customer requirements to a high standard.
If we illustrate the basic functionalities and parts of an intelligent agent in a high-level picture, it would be as follows.
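The percept-to-action cycle just described can be sketched in a few lines of code. This is a minimal illustration, not a complete design: the `SimpleAgent` class and the lookup-table agent program are hypothetical examples I have made up for this sketch.

```python
# A minimal sketch of the percept -> action cycle of an agent.
# Percepts and actions are plain strings here for illustration; the
# table-driven agent program below is a made-up example.

class SimpleAgent:
    """Maps percepts received via sensors to actions sent to actuators."""

    def __init__(self, agent_program):
        self.agent_program = agent_program  # decides the action for a percept

    def step(self, percept):
        # Sensors deliver a percept; the agent program chooses the action
        # that the actuators will carry out in the environment.
        return self.agent_program(percept)

# A trivial agent program: a lookup table from percepts to actions,
# with a default action when the percept is not recognized.
table = {"dirty": "clean", "clean": "move"}
agent = SimpleAgent(lambda p: table.get(p, "wait"))

print(agent.step("dirty"))  # clean
print(agent.step("clean"))  # move
```

The point of the sketch is the separation of concerns: the environment supplies percepts, the agent program maps them to actions, and the actuators (here, just the return value) carry the actions out.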
Figure 1 – An agent architecture – A high level view
Since rationality (doing the right thing) is the most important aspect of creating intelligence (refer to AI articles 1 and 2 for more details), with respect to agents we aim to build rational agents.
Generally, a 'rational agent' should strive to 'do the right thing' based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful. More specifically, for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
A further point is that agents can perform actions in order to modify future percepts and so obtain useful information. For example, an agent can perform an intermediate action, receive feedback about it through its percepts, and judge whether it was a good action or not. Based on the percepts it received for that particular action, the agent can decide whether that action is worth performing again in the future in order to reach the specified goal. An agent is considered 'autonomous' if its behavior is determined by its own experience: the agent learns and adapts from past experience and acts accordingly on future percepts.
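The rule "select the action expected to maximize the performance measure" can be sketched directly. The numeric estimates below are invented purely for illustration (for example, values an agent might have learned from feedback on past actions); the function and action names are hypothetical.

```python
# Sketch of rational action selection: for the current percept sequence,
# pick the action with the highest expected performance measure.

def expected_performance(action, percept_sequence):
    # Hypothetical estimates, e.g. learned from feedback on past actions.
    # A real agent would compute these from its percept history and
    # built-in knowledge.
    estimates = {"move_left": 0.2, "move_right": 0.7, "stay": 0.5}
    return estimates.get(action, 0.0)

def rational_choice(actions, percept_sequence):
    # A rational agent maximizes expected performance given its percepts.
    return max(actions, key=lambda a: expected_performance(a, percept_sequence))

percepts = ["obstacle_on_left"]
best = rational_choice(["move_left", "move_right", "stay"], percepts)
print(best)  # move_right
```

Note that the agent maximizes the *expected* performance measure: it cannot know the true outcome in advance, only an estimate based on the evidence it has.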
When designing agents in AI, we define a structure called 'PEAS', where the letters stand for Performance measure, Environment, Actuators and Sensors.
Figure 2 – PEAS definitions
If you are to design an agent, the first thing you should do is define its PEAS structure. This enables you to design the agent to meet its goal while making the implementation more focused. So, let's look at some examples of how to define the PEAS structure for different types of agents. After going through the following basic examples, you can also try defining PEAS structures for agents that you think are essential in today's world.
Example 1 – Interactive Mathematics tutor agent
- Performance Measure – Maximize students' scores on a specific test
- Environment – Set of students, Mathematics exams
- Actuators – Screen display showing exercises, examples, corrections, scores, advice and tips; the actuators can also include voice if the agent is voice-enabled, so that it provides feedback to students by speech
- Sensors – Keyboard, voice (if the agent is to work on voice commands)
Example 2 – Robot Soccer Player
- Performance Measure – Number of goals scored, number of successful defensive plays
- Environment – Football ground, teammates and opponents, spectators, referees, goalkeepers
- Actuators – Arms and legs
- Sensors – Visual Sensors, Auditory Sensors, Identification of colors and team player differences
Example 3 – Internet book shopping agent
- Performance Measure – Obtaining the required book(s) at the minimum price, minimizing overall cost
- Environment – Book selling web sites, other shopping agents, clients, financial institutions for payments
- Actuators – Screen displaying the ordered books, prices, order confirmation and estimated time of delivery
- Sensors – Keyboard input
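Because every agent design starts from the same four questions, a PEAS description is easy to capture as a small data structure. The sketch below just restates Example 3 (the Internet book-shopping agent) in code; the `PEAS` class itself is my own illustrative construct, not a standard API.

```python
from dataclasses import dataclass
from typing import List

# A PEAS description as a simple record, so each new agent design
# answers the same four questions before any implementation begins.

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

# Example 3 from the text, restated as data.
book_agent = PEAS(
    performance_measure=["order required books at minimum price",
                         "minimize overall cost"],
    environment=["book-selling web sites", "other shopping agents",
                 "clients", "financial institutions for payments"],
    actuators=["screen displaying ordered books, prices, "
               "order confirmation and delivery estimates"],
    sensors=["keyboard input"],
)

print(book_agent.sensors)  # ['keyboard input']
```

Writing the structure down first makes gaps obvious: if you cannot fill in one of the four fields, the agent design is not yet complete.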
Types of Environments
With relation to AI agents, we know that understanding the environment in which they operate is important when designing and building an agent. The percepts the agent receives vary with the type of environment, and the actions it should perform also depend on the environment of operation. Therefore, let's identify the different environments in which an AI-based agent can operate.
Table 1 – Types of environments
The following section describes in detail what each of the above environment types means.
- Fully observable – An agent’s sensors give it access to the complete state of the environment at each point in time
- Partially Observable – An agent does not have access to the complete state of the environment through its sensors, and so has to act upon partial information
- Deterministic – The next state of the environment is completely determined by the current state and the action executed by the agent
- Stochastic – The next state of the environment is not completely determined by the current state and the action executed by the agent; there is an element of randomness or uncertainty in the outcome
- Episodic – The agent’s experience is divided into atomic “episodes” (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself
- Sequential – The choice of the current action depends on previous actions, and each decision can affect all future decisions
- Static – The environment does not change while an agent is performing its task
- Dynamic – The environment changes while the agent performs its task, so an action chosen earlier may no longer be the best one once the environment has changed
- Discrete – There is a limited number of distinct, clearly defined percepts and actions
- Continuous – There can be a continual flow of percepts via the sensors on which the agent has to act accordingly with a series of actions
- Single agent – An agent operating by itself in an environment
- Multi agent – Multiple agents operate in the same environment which may lead to inter-agent communication
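These six dimensions can be captured as a small checklist in code. The classification values below are my own illustrative reading of the robot soccer player from Example 2, not an official table; the `EnvironmentType` class is a hypothetical construct.

```python
from dataclasses import dataclass

# The six environment dimensions above, as boolean flags. Filling these
# in is a useful first step before designing sensors and actuators.

@dataclass
class EnvironmentType:
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

# Illustrative classification of the robot soccer player (Example 2).
robot_soccer = EnvironmentType(
    fully_observable=False,  # the robot cannot sense the whole pitch at once
    deterministic=False,     # other players make outcomes uncertain
    episodic=False,          # each kick affects the rest of the match
    static=False,            # the game keeps changing while the robot decides
    discrete=False,          # positions and speeds vary continuously
    single_agent=False,      # teammates and opponents share the environment
)

print(robot_soccer.single_agent)  # False
```

Each `False` here pushes the design toward a harder problem: partial observability calls for internal state, stochasticity for contingency handling, and multiple agents for coordination.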
The environment type largely determines the agent design, because the design of the sensors and actuators, the percept-receiving mechanism, the action selection and what counts as successful operation all vary with where the agent operates. For example, an agent operating in a fully observable environment needs sensors capable of capturing the complete state of the environment, whereas an agent in a partially observable environment must act on incomplete information and may need to maintain internal state to compensate for what its sensors cannot see. As another example, in a single-agent environment the agent only has to interact with the user, whereas in a multi-agent environment it should be able to communicate and exchange information with the other agents, which may require additional actuators and actions. Now let's see in which of these environment types the example agents from the previous section operate.
Table 2 – Environment Types applicable to example agents
As an exercise, try to reason out why these environment types were assigned to the three example agents above. Think about why one operates in a single-agent environment while the other two are considered multi-agent environments. Likewise, try to convince yourselves of the reasons why those environment types apply to those agents. I will give an explanation in the next article, so you have time to put on your thinking caps before being handed the answers.
Something to look forward to
As already mentioned, in the next article I will give the reasons for selecting those specific environment types for the given agents. We will then move on to understanding agent functions, the design of agent programs and the different types of agents. All of this builds on the details and terminology presented today about intelligent agents and their environments, so try to get a feel for agents by reading this article. I hope you are looking forward to reading more about agents next time.
Reference: Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition