
Artificial Intelligence – Part 9

Moving ahead with Intelligent Agents

In the last article, we discussed the basics of intelligent agents, rational agents, the PEAS structure and the different types of environments in which agents can operate. Moving ahead on the same topic, we will try to understand how to match environment types to specific agents, what is meant by an agent function, how agent programs are designed, and the different types of agents. As I mentioned last time, it is quite important that you understand the fundamentals of intelligent agents presented in last month's article in order to grasp what we talk about today. So in case you missed the previous article on intelligent agents, I strongly recommend that you take some time to read it from diGIT's archive before going through today's article.

 

Linking environment types to different agents

At the end of the last article, I gave you a small exercise: to reason about why the following agents are linked to particular environment types. I hope you all had a look at it. Today I will explain the reasoning behind each of them in detail.

 

1. Let's first take the first example, an 'Interactive Mathematics Tutor'.

Partially Observable – In this case the agent is required to tutor students who use a computer-based application to solve Mathematics problems. The agent can be implemented so that it presents a set of problems for the users to solve; once the users input their answers, it compares them with the answer base the system holds in its database, marks them as correct or incorrect, and, for incorrect answers, provides an explanation of why they were wrong. Further, users should be able to input Mathematics problems of their own so that the agent can give advice on how to solve them. Since the different user inputs and questions cannot be predicted beforehand, the environment is not fully observable but only partially observable. (A small code sketch of this grade-and-explain behaviour follows at the end of this example.)

Stochastic – The next state of the environment is not completely determined by the current state and the agent's action: the agent cannot predict which answer a user will submit or which question a user will raise next. Actions such as advising on a question a user raised, grading a test done by a user, and explaining incorrect answers all depend on unpredictable user behaviour. Thus it is a stochastic environment.

Episodic – The term 'episodic' means that the agent's experience is divided into self-contained episodes rather than one continuous process. For example, for each question that the user asks, the agent gives an answer or advice without depending on the previous episodes.

Static – In this case we can assume that the environment in which the agent operates does not change while it is taking an action. For example, once the user has submitted answers to the Mathematics questions the agent set, the agent corrects them, and in the meantime the user cannot change the answers. Static environments are comparatively easy to implement.

Discrete – This agent works with a finite number of distinct states and actions, such as answering questions and correcting answers. It is not a continuous, time-critical process, and thus it operates in a discrete environment.

Single agent – This Mathematics tutor operates on its own and does not communicate with any other intelligent agents; it interacts only with the users, who are humans. We can assume that this agent runs on a single stand-alone computer rather than a networked system and that all the actions are performed on its own (single agent). There is a possible variation in which the tutor is linked to a network of tutoring agents, each providing specific actions, which would yield a multi-agent environment, but that is beyond the scope of this example.
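To make this concrete, here is a minimal Python sketch of the grade-and-explain behaviour described under 'Partially Observable' above. The tiny answer base is a hypothetical placeholder; a real tutor would of course need a far richer knowledge base.

    # A minimal sketch of the tutor's grade-and-explain behaviour.
    # The answer base below is a hypothetical placeholder.
    ANSWER_BASE = {
        "2 + 2": ("4", "Add the two numbers together."),
        "3 * 5": ("15", "Multiply 3 by 5."),
    }

    def grade(question, user_answer):
        """One self-contained episode: grade an answer, explain if wrong."""
        correct, explanation = ANSWER_BASE[question]
        if user_answer.strip() == correct:
            return "Correct!"
        return "Incorrect. The answer is {}. {}".format(correct, explanation)

    print(grade("2 + 2", "4"))   # Correct!
    print(grade("3 * 5", "10"))  # Incorrect, with an explanation

Each call to grade is one self-contained interaction, which is exactly the property that makes this environment episodic.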

2. Now let's consider the second example, a 'Robot Soccer Player'.

Partially Observable – The robot soccer player can be implemented with certain specific actions such as kick, run and tackle, which it selects according to the percepts it receives. But we cannot predict for sure what the environment will be in advance, because each game of soccer differs from every other based on many factors, some of which are the teams' overall playing strategies, the players' ability levels and the ground conditions. Every soccer game is different, and thus the environment can only be partially observable in this case. (A small sketch of such a percept-to-action rule follows at the end of this example.)

Stochastic – The next state of the game is not completely determined by the current state and the action executed by the robot, because other factors, such as the movements of the other players, also influence what happens next. The next state therefore depends on many factors, yielding a stochastic environment.

Sequential – The process of playing a soccer game is sequential, since one action can have long-term consequences. For example, if the soccer-playing robot scores a goal early in the game and the opponents do not score at all, then the robot's team wins the match, showing how each action feeds into a sequential process.

Dynamic – The soccer-playing environment changes every moment. The actions of each player, the positions of the players, the position of the ball and the decisions of the other players all change the environment from one second to the next, and it can change dramatically while our robot soccer agent is still carrying out an action. Thus it is a dynamic environment.

Continuous – The positions of the players and the ball, as well as time itself, vary smoothly rather than jumping between a fixed set of distinct states, so the soccer game is a continuous process from the start to the end of the game.

Multi agent – The robot soccer player plays alongside its fellow robot team members as well as against the opponents. This is indeed a multi-agent environment: the robot soccer agent has to communicate with its fellow robots and must also be able to recognize the opposing team's robots and try to win the game for its own side.
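As a toy illustration of the percept-to-action mapping mentioned under 'Partially Observable' above, here is a minimal Python sketch. The percept fields and the three actions are simplified assumptions, not a real robot-soccer API.

    # A toy percept-to-action rule for the soccer robot.
    # The percept fields below are simplified assumptions.
    def decide(percept):
        """Map the current percept to one of the robot's actions."""
        if percept["has_ball"]:
            # Kick when close to the goal, otherwise run with the ball.
            return "kick" if percept["distance_to_goal"] < 5.0 else "run"
        if percept["opponent_has_ball"]:
            return "tackle"
        return "run"  # chase the ball

    print(decide({"has_ball": True, "distance_to_goal": 3.0,
                  "opponent_has_ball": False}))  # kick

Note that choosing an action does not determine its outcome: in this stochastic, dynamic environment, a 'kick' may still be blocked by an opponent.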

 

3. The third example in the list was the 'Internet book shopping agent'. Let's see how its environment behaves in detail.

Partially Observable – The internet book shopping agent is a tool that users can use to find the most affordable books matching their preferences and to order them on their behalf so that they arrive by a required date. In a nutshell, the agent has to search the online book sellers and other online merchant sites for the books, compare prices, check shipping options and stay within the budget given by the user. But certain sites do not let autonomous bots (automated shopping agents) order items, restricting access and adding many security measures; the agent cannot reach book-selling sites that go offline; and the data it derives from online sites can be missing or obsolete. Thus, it is a partially observable environment. (A small sketch of the price-comparison step follows at the end of this example.)

Deterministic – The next state of the environment is completely determined by the current state and the action the agent executes: after comparing the prices and other attributes of the respective book, the agent takes the next action of ordering the book from the online site that best satisfies the user's requirements. Thus the next state follows directly from what the agent perceived and did in the previous states.

Sequential – The online shopping is done as a step-by-step process: first the agent finds the book(s) matching the given specifications such as author, title and edition, then it compares prices and other factors, then it decides on the best site, and finally it orders the book(s). It can clearly be seen that this is a sequential environment.

Dynamic – It is also worth noticing that the prices of the books, the availability of stock and so on can change on each online book-selling site at the very moment the shopping agent is deciding where to order from, based on information gathered in a previous state. Thus the online agent cannot simply rely on information gathered in the past; it has to re-check at each instance for changes occurring on the online sites with respect to the book or books of concern. It is therefore clear that this is a dynamic environment.

Discrete – This is a discrete-state environment with specific states, starting from the user giving the specifications for buying a book and ending with the agent ordering a book that meets the user's requirements.

Multi agent – The online shopping agent might communicate with other online shopping agents roaming the internet to gather information about books and their features, rather than visiting each online site on its own. This can lead to collaborative work that is productive for all the shopping agents and lets them meet their objectives faster than acting alone. So this can be a multi-agent environment, with communication and collaboration taking place among the thousands of available internet shopping agents.
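Here is a minimal Python sketch of the price-comparison step described above. The offer data is a hypothetical placeholder standing in for whatever information the agent gathered from the online sites.

    # Hypothetical offers gathered from online book sellers:
    # site -> (price, shipping cost, in stock?)
    offers = {
        "site-a.example": (24.50, 3.00, True),
        "site-b.example": (22.00, 6.50, True),
        "site-c.example": (19.99, 2.50, False),  # out of stock
    }

    def best_site(offers, budget):
        """Pick the cheapest in-stock offer (price + shipping) within budget."""
        in_stock = {site: price + shipping
                    for site, (price, shipping, available) in offers.items()
                    if available and price + shipping <= budget}
        return min(in_stock, key=in_stock.get) if in_stock else None

    print(best_site(offers, budget=30.00))  # site-a.example (27.50 in total)

Because the environment is dynamic, the agent would have to re-run this comparison whenever it perceives that prices or stock levels have changed.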

I think that after the detailed explanations of the above three examples and their respective environments, our readers have a clear idea of how to determine an intelligent agent's environment types. Now let's move on to identifying what an agent function and an agent program are.

 

Agent Functions & Agent Programs

An agent is completely specified by its 'agent function', which maps the percept sequences it receives through its sensors to actions. In AI, we have to know how to design an 'agent program' that implements this agent function. So, in a nutshell, what the agent program does is implement a mapping from each percept sequence the agent receives from its sensors about the environment to the action that the agent should ultimately perform to achieve its objective.
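In Python terms, one might express the shape of an agent function as in the following sketch; the type names are purely illustrative and not from any standard agent library.

    from typing import Callable, Sequence

    # Illustrative type aliases: percepts and actions could be anything.
    Percept = str
    Action = str

    # The agent function maps a whole percept sequence to an action.
    AgentFunction = Callable[[Sequence[Percept]], Action]

    def example_agent(percepts: Sequence[Percept]) -> Action:
        """A trivial agent function: act only on the latest percept."""
        return "greet" if percepts and percepts[-1] == "hello" else "wait"

    print(example_agent(["hi", "hello"]))  # greet

The agent function is the abstract mapping; the agent program is the concrete implementation of that mapping which runs on the agent's architecture.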

Overall, an agent is a combination of the agent program and the agent architecture. For example, if the agent program implements an action called 'move forward', then the architecture needs to have certain mechanical parts such as legs, wheels or a motor which enable it to move forward. In the case of our robot soccer player, for it to be able to kick and perform the other actions required of a typical soccer player, the agent needs an architecture with robotic legs, arms, vision and so on. The architecture can also be just a computer with keyboard input and a screen, as in the case of the Mathematics tutor agent.

A typical agent program takes the current percept as input, appends it to the percept sequence, looks up the table that maps each percept sequence to its respective action, and outputs that action. Following is a high-level sketch of what such a basic agent program looks like, given that the table mapping percept sequences to actions has been created in advance.
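Here is a minimal Python rendering of that idea, in the spirit of the table-driven agent described in Russell & Norvig's book; the two-entry table is only a toy example.

    # A table-driven agent program, sketched in Python.
    percepts = []  # the percept sequence seen so far

    # The table maps entire percept sequences (as tuples) to actions.
    # These two entries are toy placeholders.
    TABLE = {
        ("A",): "act-1",
        ("A", "B"): "act-2",
    }

    def table_driven_agent(percept):
        """Append the new percept, then look up the action for the sequence."""
        percepts.append(percept)
        return TABLE.get(tuple(percepts), "no-op")

    print(table_driven_agent("A"))  # act-1
    print(table_driven_agent("B"))  # act-2

Notice that the table is keyed on the entire percept sequence, not just the latest percept, which is exactly why it grows so quickly.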

 

Practically, such a program would not be implemented except for very small and simple agents, where the percept sequences are few and the environment is fully observable and static. Creating the entire look-up table requires you to assume that the environment is static and fully observable. Further, the table size grows drastically with the number of possible percepts and the length of the percept sequences: an agent with |P| possible percepts and a lifetime of T percepts needs a table with |P| + |P|^2 + ... + |P|^T entries, which quickly becomes impossible to store and a tedious, practically impossible task for the designer of the agent to fill in. Moreover, since most environments are neither static nor fully observable, and the designer would not know for sure prior to implementation which percept sequence should trigger which action, this approach is not feasible in general. It is the simplest way of implementing an agent program, but it lacks sophistication.
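To get a feel for the numbers, here is a tiny Python calculation of the table size for some small, made-up values of |P| and T.

    # Number of table entries: |P| + |P|**2 + ... + |P|**T
    def table_entries(num_percepts, lifetime):
        return sum(num_percepts ** t for t in range(1, lifetime + 1))

    # Even modest, made-up numbers explode quickly:
    print(table_entries(10, 5))   # 111110
    print(table_entries(10, 20))  # about 1.1e20 entries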

 

Next in line

In general, the main idea behind building intelligent agents is for each agent to meet a certain goal while being able to learn from its experiences as it operates in its environment, and to apply that learning to reach the goal more effectively and efficiently. We will look at how AI achieves this by examining different types of agents and then how to turn them into learning agents. The following are the types of agents we can build to achieve certain tasks (a tiny preview sketch of the first type follows the list).

  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • Learning agents
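As a small preview of the first type on this list, a simple reflex agent chooses its action based only on the current percept, using condition-action rules. The sketch below follows the classic textbook vacuum-cleaner flavour and is only a toy.

    # A simple reflex agent: condition-action rules over the current percept.
    RULES = {"dirty": "suck", "clean": "move"}

    def simple_reflex_agent(percept):
        return RULES.get(percept, "no-op")

    print(simple_reflex_agent("dirty"))  # suck

Unlike the table-driven agent above, it ignores the percept history entirely; we will see what that costs it next month.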

 

We will look into the details of each agent type mentioned above, and their differences, in the next article. Await more of our discussion on intelligent agents next month!

 

References

Artificial Intelligence: A Modern Approach, Second Edition, Stuart Russell & Peter Norvig


Chathra Hendahewa is a graduate student at Rutgers – The State University of New Jersey since fall 2008 and is currently in her second year of studies. She is studying towards her Doctorate in Computer Science, and her major research interests are pattern classification with applications to financial systems, machine learning and cognitive sciences. She was the first ever International Fulbright Science & Technology Award scholar from Sri Lanka. Chathra graduated from the Faculty of Information Technology, University of Moratuwa in 2007. She is also a passed finalist and a multiple gold medalist of CIMA (Chartered Institute of Management Accountants – UK). She has professional experience working as a Business Systems Analyst for more than a year at IFS R&D and has completed an internship developing modules for the 'Sahana' Disaster Management System. She loves reading biographies, mysteries and science fiction and listening to music in her free time.
