Home Classroom


Pay Per Click (PPC) Advertising on Search Engines

My last article in Digit Magazine about Search Engine Optimization received an overwhelmingly positive response from readers, and I thank everyone who contacted me via Twitter and email. SEO is becoming a widely discussed topic in Sri Lanka, and many local businesses are now applying Search Engine Optimization with a great level of expertise, which is a good trend. I hope I solved your SEO problems to the best of my ability.

Moving on from Search Engine Optimization, this issue covers another form of search engine marketing. This time we will discuss in detail 'Pay Per Click' advertising, or PPC as we commonly refer to it.

When we say Pay Per Click advertising on search engines, we are actually talking about the 'Sponsored Links' appearing on your search engine results page.

PPC comes in handy when it is too competitive for you to reach the top ranks in the organic results. For example, if you want your site to appear in the top 5 organic results for the keyword "Cricket World Cup", you will have to compete with over 57,100,000 web pages, which requires an enormous SEO effort. But if you have a budget to spend, you can buy a spot in the 'sponsored links' section of the SERP quite easily.

PPC advertising is by far the most widespread form of pay-for-performance search marketing. As it’s the most effective way for a search engine to make money, PPC is offered by almost every SE on Earth. However, the two most prominent providers of PPC advertising are Google and Yahoo.

Google AdWords

Google AdWords is the world leader in pay-per-click advertising. Currently it has more than 150,000 advertisers. The ads show up not only with Google search results, but also with Google partners such as AOL search, About.com and thousands of other websites that publish AdWords ads. Google has an interesting ad ranking system. It ranks ads not by the bid (the amount their owners are ready to pay for one click), but by the combination of the bid and the click-through ratio. This way, Google maximizes its revenue stream (since Revenue to Google = Bid x CTR x Views) and gives small advertisers an opportunity to effectively compete with big companies. A small advertiser cannot compete on the cost-per-click basis, but can successfully overcome any big company in terms of the click-through ratio.

AdWords ads can only contain 95 characters: 25 for the headline, then two 35-character-long description lines, and a visible URL field.

AdWords gives advertisers several keyword-matching options: broad match, exact match, phrase match, and negative keywords. The matching options define how close the search string entered by a user must be to the keyword selected by the advertiser. If the advertiser has chosen [cricket bats] as their keyword (square brackets mean exact match), their ad will be shown only if a user enters exactly cricket bats into the search box. If the advertiser has chosen "cricket bats" (quotation marks mean phrase match), the ad shows up if a user searches for heavy cricket bats or used cricket bats, but not for cricket bat on its own, since the exact phrase must appear in the query.

Yahoo Search Marketing (YSM)

Yahoo! Search Marketing Solutions http://searchmarketing.yahoo.com (formerly Overture) is the second top player after Google in the field of PPC advertising. Overture (originally called GoTo) was the first company to offer PPC advertising and was later acquired by Yahoo. Yahoo ranks ad listings exclusively based on the bid amount; furthermore, if you get a top-3 listing with Yahoo, your ad will be prominently placed with MSN, Altavista, CNN, and other search and news portals.

Other PPC Engines

A PPC model is an excellent source of income for any search engine; therefore, almost every search engine offers PPC advertising opportunities alongside the two major players described above.

I personally suggest that anyone interested in PPC marketing start with Google AdWords. It's easy to understand, and comes with a user-friendly interface and a clear campaign structure.

Some key metrics that you have to pay attention to when doing PPC marketing are:

  • Max CPC Bid – the maximum amount we are willing to pay per click
  • CPC (Cost Per Click) – the amount we actually paid per click
  • Average Position (Rank) – the average position at which our ad is displayed in the sponsored listings
  • Impressions – the number of times our ad was shown
  • Clicks – the number of clicks we received
  • CTR (Click-Through Rate) – clicks as a percentage of impressions
  • C/R (Conversion Rate) – transactions as a percentage of clicks
  • EPC (Earnings Per Click) – how much we earned per click
  • Cost Per Transaction – total cost divided by the number of transactions
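To make the relationships between these metrics concrete, here is a small sketch with made-up campaign figures (all numbers are hypothetical, not from any real campaign):

```python
# Hypothetical campaign figures, for illustration only.
impressions = 10_000
clicks = 250
transactions = 10
cost = 75.0       # total spend
revenue = 300.0   # total earnings

ctr = clicks / impressions               # Click-Through Rate
cpc = cost / clicks                      # actual Cost Per Click
conversion_rate = transactions / clicks  # C/R
epc = revenue / clicks                   # Earnings Per Click
cost_per_transaction = cost / transactions

print(f"CTR: {ctr:.1%}, CPC: {cpc:.2f}, C/R: {conversion_rate:.1%}, "
      f"EPC: {epc:.2f}, Cost/Transaction: {cost_per_transaction:.2f}")
```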

I believe this article covered a basic introduction to what PPC advertising is, and how you can start campaigning on Google AdWords. In the next issue, I intend to focus on 'Email Marketing'.

Informed Search Strategies

In the last article on Artificial Intelligence, which appeared in diGIT's June 2010 issue, we discussed different types of 'informed search' strategies. Just to recall, this type of search uses extra problem-specific knowledge, beyond the problem definition itself, to find the best solution; informed search finds good solutions more efficiently than uninformed search. You might also remember that this approach goes by another name: 'heuristic search'. Last time we mainly discussed two such searches, 'Greedy Best-First Search' and 'A* Search'. Please refer to the last article if you have forgotten what they are all about.

Today, we will discuss some other heuristic search strategies and also try to understand heuristic functions and how they can be constructed.

Types of Informed Search Continued
1. Best-First Search
  a. Greedy best-first search
  b. A* search

The above search strategies were discussed in the previous article.

2. Memory bounded heuristic search

As the name implies, this type of search tries to reduce memory requirements while searching based on heuristics. One of the main drawbacks of the A* search we briefly described last time is its high memory consumption: it has to keep all generated nodes in memory at each iteration, so when run on a large-scale problem it will simply run out of memory. Some such memory-bounded algorithms are described below.

a. Iterative-deepening A* search (IDA*)

In general, iterative-deepening search is a step-by-step procedure where the problem space is expanded until a certain pre-specified limit is reached. In the case of A* search, the evaluation function is defined as follows.

f(n) = g(n) + h(n)

where;

  • g(n) – the cost to reach node n from the start node
  • h(n) – the estimated cost to reach the goal node from node n

In iterative-deepening A*, the pre-defined limit is an upper bound on the value of the evaluation function f(n). If a node's f(n) value is below the cut-off limit, the node is expanded; if the algorithm reaches a node whose f(n) value exceeds the limit, it stops expanding that node on that particular path and moves on to other paths towards the goal. The cut-off is then raised (typically to the smallest f-value that exceeded it) and the search is repeated. By imposing this kind of cut-off on the evaluation cost, nodes that cannot lie on a feasible path are never put into memory; only the nodes on feasible, cost-effective paths are stored. This leads to a much lower memory requirement than running A* in full.
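As an illustration, here is a compact sketch of IDA*; the graph, step costs and heuristic values below are invented for the example:

```python
import math

def ida_star(start, goal, neighbors, h):
    """Iterative-deepening A*: depth-first search bounded by f = g + h.

    `neighbors(n)` yields (next_node, step_cost) pairs; `h(n)` is the
    heuristic estimate. Returns the cost of a path from start to goal,
    or math.inf if none exists.
    """
    def search(node, g, bound, visited):
        f = g + h(node)
        if f > bound:
            return f          # report the smallest f that broke the bound
        if node == goal:
            return -g         # non-positive marks success; |value| is the cost
        minimum = math.inf
        for nxt, cost in neighbors(node):
            if nxt in visited:
                continue
            t = search(nxt, g + cost, bound, visited | {nxt})
            if t <= 0:
                return t
            minimum = min(minimum, t)
        return minimum

    bound = h(start)
    while True:
        t = search(start, 0, bound, {start})
        if t <= 0:
            return -t         # found: total path cost
        if t == math.inf:
            return math.inf   # no solution
        bound = t             # raise the cut-off and deepen again

# A tiny weighted graph (made up for illustration):
graph = {'A': [('B', 1), ('C', 4)], 'B': [('D', 5)], 'C': [('D', 1)], 'D': []}
h_values = {'A': 3, 'B': 4, 'C': 1, 'D': 0}
cost = ida_star('A', 'D', lambda n: graph[n], lambda n: h_values[n])
# cost == 5 (the cheaper path A-C-D)
```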

b. Recursive best-first search (RBFS)

A recursive algorithm is one which calls itself with different input arguments each time, until a certain termination condition is reached. RBFS is a simple recursive algorithm which attempts to do the work of standard best-first search (which was explained last time) using only a limited, linear memory space. In this type of best-first search the evaluation function f(n) is again the combination of:

  • g(n) – the cost to reach node n from the start node
  • h(n) – the estimated cost to reach the goal node from node n

The RBFS algorithm keeps track of the f-value of the best alternative path available from any ancestor of the current node. If the current node's f-value exceeds this limit, the recursion unwinds and the algorithm takes the alternative path instead. As the recursion unwinds, RBFS replaces the f-value of each node along its path with the best f-value of its children, so it remembers how promising the abandoned subtree was. The memory requirement stays limited, because the algorithm does not keep all expanded nodes in memory; it only stores the f-values of the best alternative path from each ancestor of the current node, comparing them with the current value and updating them as required. Although this looks quite feasible, RBFS has another drawback: it uses too little memory. It cannot utilize all the available memory even when doing so would help, so in a sense it wastes memory the system could offer while it re-expands nodes it has forgotten. Therefore, other algorithms have come into play which are memory bounded but can also utilize all available memory; one is described in the following section.

c. Simplified Memory-bounded A* search (SMA*)

The SMA* algorithm works much like the general A* search algorithm, with one exception: it expands the best leaf only until memory is full. At that point it has to drop an existing node from memory to fit in the new one. SMA* follows a simple strategy: it drops the worst leaf node already in memory and adds the new node. The 'worst leaf node' is the leaf in memory with the highest evaluation function value f(n), which implies it costs the most to expand on the way to the goal. There might be a scenario where all the leaf nodes have the same f-value. In such a case, how does SMA* decide which node is the worst and which is the best, since the same node could be selected for both expansion and deletion? SMA* solves this by expanding the newest best leaf and deleting the oldest worst leaf: among nodes with equal f-values, it deletes the one that has been in memory the longest and expands the one added most recently. In this way, all of the limited memory available is used to tackle the problem without wasting any resources.

Heuristic functions

The heuristic function depends on the problem at hand. If the goal of a problem is to find the shortest path from one place to another, then an obvious heuristic function is an estimate of the distance from each node to the goal node (for example, the straight-line distance). Problems in practice are rarely this simple and obvious, so it is essential to know the important aspects to consider when designing a good heuristic function for a given problem space. Some important pointers when designing a heuristic function are as follows.

  • It should never overestimate the true cost of reaching the goal – for example, if the cheapest path from a given node to the goal node costs 'x' steps, then the heuristic value at that node should not exceed x. Such a heuristic is called admissible.
  • The quality of the heuristic largely depends on the 'effective branching factor' – which is defined as follows. If the total number of nodes generated by A* search for a problem is N, and the solution tree depth is d, then the effective branching factor b is defined by the following equation:

N + 1 = 1 + b + b^2 + b^3 + … + b^d

Therefore, after subtracting 1 from both sides of the equation:

N = b + b^2 + b^3 + … + b^d

The right-hand side is a geometric series with first term a = b, common ratio r = b and n = d terms. Recalling your year 10-11 mathematics, the sum of such a series is

S = a(r^n − 1) / (r − 1)

where r is the common ratio and n the number of terms. Therefore our equation can be rewritten as:

N = b(b^d − 1) / (b − 1)

An approximate solution for b can then be obtained numerically, and this value is called the effective branching factor. It has been found that for a well-designed heuristic, the value of the effective branching factor is close to 1.
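As a quick illustration, b can be approximated numerically, e.g. by bisection on N = b + b^2 + … + b^d (the values of N and d below are made up):

```python
def effective_branching_factor(n_nodes, depth, tol=1e-9):
    """Solve N = b + b^2 + ... + b^d for b by bisection."""
    def total(b):
        # sum of the geometric series b + b^2 + ... + b^depth
        return sum(b ** i for i in range(1, depth + 1))

    lo, hi = 1.0, float(n_nodes)  # total(1) <= N and total(N) >= N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_nodes:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Made-up example: 62 nodes generated, solution found at depth 5.
b = effective_branching_factor(62, 5)
print(round(b, 3))  # prints 2.0, since 2 + 4 + 8 + 16 + 32 = 62
```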

Stay tuned!

Over the past few months, and in today's article, we have discussed a lot about the search strategies used in problem solving, a vital component of Artificial Intelligence. We learnt about problem-solving basics, formulation of problems and solutions, uninformed (blind) search strategies and informed (heuristic) search strategies throughout this article series. I hope you were able to gain some understanding and develop an interest in this area, which paves the way to AI. From the next article onwards, we will present more machine learning techniques in AI, such as neural networks, classification and clustering. So, stay tuned!

References
Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition


Search Engine Optimization

If you do a Google search for the keyword "world cup knockout table", you will find a page from my personal website (www.amisampath.com) ranked among the top 5 results on the first search engine results page, out of 1,320,000 competing web pages indexed by Google for this keyword. During the first few days of the FIFA World Cup's knockout stage, I got more than 2,000 visitors to this web page from around the world.

Is this accidental? No. This page is part of one of my experiments in Search Engine Optimization. I purposely picked the keyword "world cup knockout table" and created a page targeted at people who would potentially search for this keyword when the knockout round of the World Cup started. Then I applied some general SEO tactics, plus a few of my own experiments, to see what the outcome would be. The result has been phenomenal, as you can see from the Google Analytics screenshot below.

This is only a simple demonstration of the power of SEO in driving free-of-charge traffic to your website. As I mentioned earlier, the above example is from an experiment I did to get "quick results" in SEO. I will now consider practically implementing the tactics I used here to achieve serious commercial objectives for my employer. SEO is an essential element of your online marketing strategy: without a good SEO strategy, you cannot gain proper visibility for your website on the internet.

SEO is aimed at increasing the rank of your website so that it shows up in the top ten of a search engine results page (SERP). Since these top positions are the most clicked on in a search, ranking there will dramatically increase traffic to your site. Getting to the #1 position is the most difficult, but if you get there, it will earn you a huge amount of website traffic. SEO is one of the most effective ways to drive traffic to your website.

But the question that most people don't have an answer to, and that businesses spend millions of dollars every year trying to answer, is: "how can I get to one of these top ten positions?"

Explained in the simplest language, it works like this. Search engines look through an index of websites and find those that seem to best match the search term used. If you are searching for "Cricket World Cup 2011", the engine will look through its index for websites containing those terms. When it finds the terms, it weights them based on several factors: for example, where the keywords are placed (domain name, meta tags, page title, headers, links, body paragraphs, etc.; headers and titles take precedence over body-paragraph occurrences), and how close the terms are to each other (if there are multiple terms, such as "World Cup", which is two search terms: "World" and "Cup"). It then adds up all the weighted scores for each website and ranks the websites according to how well they fit the bill. Finally it displays the results, with the top-ranking sites on the first page and the other sites relegated to the dumps, where hardly anyone ventures. This is how search engines rank sites.
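The weighting idea described above can be caricatured in a few lines of code. The fields and weights here are invented purely for illustration; real ranking algorithms are vastly more complex and secret:

```python
# Toy illustration of weighted keyword scoring; weights are invented.
FIELD_WEIGHTS = {"title": 3.0, "headers": 2.0, "url": 2.0, "body": 1.0}

def score_page(page, terms):
    """Sum weighted occurrences of each search term across page fields."""
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        text = page.get(field, "").lower()
        for term in terms:
            score += weight * text.count(term.lower())
    return score

pages = [
    {"title": "Cricket World Cup 2011", "body": "world cup fixtures"},
    {"title": "Gardening tips", "body": "cup of tea in the world"},
]
ranked = sorted(pages, key=lambda p: score_page(p, ["world", "cup"]),
                reverse=True)
# The cricket page wins: its terms appear in the heavily weighted title.
```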

If search engines rank web pages according to a pre-defined algorithm (which includes certain qualification criteria), then we can possibly make our web pages match that algorithm, and get our pages ranked at the top of the search engine's organic results. This is SEO in basic language. But it isn't as easy as that. There are certain pitfalls to avoid and best practices to follow. Above all, there is no one on this earth who knows exactly all the criteria included in Google's algorithm; it's all educated guesswork, based on hints dropped by influential people at Google.

There are some fundamental truths in SEO and it is fair to say that search engines today consider the following when ranking a given web page:

  • The content of the page – what it's about and which words are used prominently.
  • The words used in the title of the page.
  • The words used in the URL of the page.
  • The words highlighted on the page.
  • The internal links (links from other pages on the same site) that point to it.
  • How many external links point to it and, more importantly, whether the linking pages are relevant to the page's subject matter.
  • The text used to form the internal and external links (the anchor text).
  • Even the age of the domain name plays a role in its ranking!
There is a lot more involved of course and that list could have gone on for a while. Factors such as keywords, the use of images and Flash animations, and the design of the site itself also play a role in a page’s ranking.

As I mentioned at the outset, SEO is not a topic that can be covered in a single article like this. If you are interested in learning more about SEO, and especially in some of the unpublished strategies used to optimize your website, you can always drop me an email at amitha [ at ] amisampath [ dot ] com. I will direct you to more informative sources so you can learn SEO and put it to practical use.


Part 8 – Moving from Drupal 5 to 6

With Drupal 6 becoming a solid platform and most contributed modules now supporting it, you might need to migrate an older Drupal 5 site to 6. In this episode we look at the key changes in Drupal 6 that should be considered when adopting the new platform.

Schema API

In Drupal 6, a new Schema API has been introduced that lets modules declare their database tables as structured arrays, replacing the raw SQL queries needed in older versions. The schema structure is similar to how the Form API operates, and it provides API functions for creating, dropping, and changing tables, columns, keys, and indexes. Further, it gives you a convenient, DBMS-independent way to define your tables.

With the introduction of the Schema API, a new hook for integrating schemas has been introduced as well. You can implement hook_schema in your module's .install file to use this new improvement.

Here is an example of what a simple Drupal schema definition looks like inside hook_schema.
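A minimal sketch in the standard Drupal 6 format; the module name mymodule and its table and field names are hypothetical:

```php
<?php
// Sketch of a Drupal 6 hook_schema implementation for a hypothetical
// module 'mymodule' declaring a single 'mymodule_item' table.
function mymodule_schema() {
  $schema['mymodule_item'] = array(
    'description' => 'Stores items for mymodule.',
    'fields' => array(
      'iid' => array(
        'type' => 'serial',
        'unsigned' => TRUE,
        'not null' => TRUE,
        'description' => 'Primary key: item ID.',
      ),
      'title' => array(
        'type' => 'varchar',
        'length' => 255,
        'not null' => TRUE,
        'default' => '',
      ),
    ),
    'primary key' => array('iid'),
  );
  return $schema;
}
```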

New Formats for hook_install and hook_uninstall

The format of hook_install has been changed to the following. Instead of writing SQL queries within your hook_install, just call drupal_install_schema() to invoke hook_schema and setup your tables.
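For a hypothetical module named mymodule, the new install hook reduces to a single call:

```php
<?php
// Drupal 6 hook_install for a hypothetical module 'mymodule'.
function mymodule_install() {
  // Creates every table declared in mymodule's hook_schema().
  drupal_install_schema('mymodule');
}
```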

In hook_uninstall use the following format.
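For a hypothetical module named mymodule, the uninstall counterpart looks like this:

```php
<?php
// Drupal 6 hook_uninstall for a hypothetical module 'mymodule'.
function mymodule_uninstall() {
  // Drops the tables declared in mymodule's hook_schema().
  drupal_uninstall_schema('mymodule');
}
```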

New Syntax for .info Files for Modules

You can now define your module's dependencies in its .info file. You can also declare which Drupal core version your module is compatible with, as well as the required PHP version.

name = Forum
description = Enables threaded discussions about general topics.
dependencies[] = taxonomy
dependencies[] = comment
core = 6.x
php = 5.1
Introduction of .info Files for Themes

You can now define your theme's regions directly in its new .info file, placed in the theme directory. The format is as follows.

name = mytheme
description = MyTheme description
version = VERSION
core = 6.x
engine = phptemplate
regions[left] = Left sidebar
regions[left_extra] = Left extra sidebar
regions[content] = Content
regions[content_extra] = Content Extra

Using Theme Registry

If you want to override the themed output of a hook with one of your own templates, here is how you do it: implement hook_theme in your theme's template.php file and follow the given format. Here search_form is the hook and mytemplate.tpl.php is the custom template that will be used to theme the hook's output. For more complex overriding behaviours, study the hierarchy of the Drupal theming system given in Figure 1.
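A minimal sketch of such an override, assuming the theme is named mytheme as in the later examples:

```php
<?php
// In mytheme/template.php: point the 'search_form' theming hook at a
// custom template file (mytemplate.tpl.php in the theme directory).
function mytheme_theme($existing, $type, $theme, $path) {
  return array(
    'search_form' => array(
      'arguments' => array('form' => NULL),
      'template' => 'mytemplate',  // renders mytemplate.tpl.php
    ),
  );
}
```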

Figure 1: Drupal theming hierarchy
(Source: About overriding themable output – http://drupal.org/node/173880)

Introduction of Preprocessor Functions

Preprocess functions apply only to theming hooks implemented as templates; their main role is to set up the variables to be passed to the template (.tpl.php) files. This is a welcome replacement for the ugliness of _phptemplate_variables($hook, $vars) in older versions.

If you want to add or modify variables introduced through a hook named foo, and your theme is named 'mytheme', then you do the following.
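A sketch of such a preprocess function; the variable added here is hypothetical:

```php
<?php
// In mytheme/template.php: add or modify variables for the 'foo' hook.
function mytheme_preprocess_foo(&$variables) {
  // Hypothetical variable, available as $greeting inside foo.tpl.php.
  $variables['greeting'] = t('Hello from mytheme');
}
```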

Template Suggestions

Last but not least, the template suggestion system in Drupal 6 is worth a look. For custom suggestions, you define a mytheme_preprocess_page function in your theme and add the template suggestions/variables to be passed to the page template.

The following code adds template suggestions to the Drupal rendering system so that every node rendering first looks for a template named "page-node-{content_type}.tpl.php".
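A sketch of this, following the Drupal 6 'template_files' suggestion convention (again assuming the theme is named mytheme):

```php
<?php
// In mytheme/template.php: suggest page-node-{content_type}.tpl.php for
// node pages; Drupal falls back to page.tpl.php if it doesn't exist.
function mytheme_preprocess_page(&$variables) {
  if (isset($variables['node'])) {
    $variables['template_files'][] = 'page-node-' . $variables['node']->type;
  }
}
```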

More, more and more!

This episode only touched the tip of the iceberg that is Drupal 6. For more information on changes from Drupal 5 to 6, check Drupal 6 theme guide (http://drupal.org/theme-guide/6) and Drupal 6 module migration guide (http://drupal.org/node/114774).

SQLite databases are the recommended way of storing significant amounts of structured data on the Android operating system. Because of the way user focus is organized on this platform, there are several kinds of contexts databases can be used in, e.g. activities (the Activity class), services (the Service class) or widgets. The first two inherit from the Context abstract class, while widgets receive Context references passed into them; these, along with optimizations, will be discussed in the next sections.


  1. Introduction

Context is actually an interface to global information about the application environment, with the implementation provided by Android itself. Knowing where Context references exist helps us determine their scope, and even how to optimize classes constructed with such references [1]. An Activity is, broadly speaking, the logical unit of the user's interaction with the phone; usually, only one Activity has focus at a time [2]. A Service is similar, but it doesn't target the UI and is designed for longer-running operations; it's a long-lived component of an Android application [3].

SQLite is an easy-to-use database system; there are helper classes to aid in database creation and version management, such as SQLiteOpenHelper [4]. Keep in mind that your app's users might be on different database versions when upgrading to the latest app version, so managing structural changes by implementing the onUpgrade(…) method is recommended.


 

2. Modelling & Creating the Database

Figuring out the best object-oriented model can depend on various factors. For example, you might want to focus on so-called real-time usage needs, or you might want a trade-off with security features. Before explaining this choice, we first need to know how the database is created [5]:


  private class DatabaseHelper extends SQLiteOpenHelper {

    DatabaseHelper(Context context) {
      super(context, DB_NAME, null, DATABASE_VERSION);
      mCtx = context;
      res = context.getResources();
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
      Resources r = res;

      // creating the tag and entry tables and inserting a default tag
      db.execSQL("CREATE TABLE " + DB_TAG_TABLE + " (" + KEY_ROWID
          + " integer primary key autoincrement, " + KEY_NAME
          + " text not null);");
      db.execSQL("INSERT INTO " + DB_TAG_TABLE + " (" + KEY_NAME
          + ") VALUES ('" + r.getString(R.string.tag1) + "')");
      upgrade(db);
    }
  }


As can be seen above, actual SQL code is passed to the execSQL method of the wrapper class. In fact, this snippet of code already contains an important optimization: the general isolation of upgrade operations which can be replicated for future users without being explicitly included in the SQL creation code. Details about the advantages of a custom upgrade method will be discussed in the following sections.

Classes such as DatabaseHelper are usually wrapped in other classes, becoming an inner class of the wrapper; this way, other helper methods can be implemented around the database helper. Before it's actually used, it's important to know that an SQLiteDatabase instance will be doing the hard work. One can be obtained from the database helper, in an open method (context use can be optimized) [5]:

  public ToDoDB open() throws SQLException {
    mDbHelper = new DatabaseHelper(mCtx);
    mDb = mDbHelper.getWritableDatabase();
    return this;
  }


Similarly, a closing method is needed – either for releasing resources or for freeing the database up for a lower level handling (more about the latter in the following sections) [5].
  public void close() {
    mDbHelper.close();
  }


But first, an object model should be established. Some developers execute their queries surrounded by the opening and closing methods, so the default state, in that case, is a closed database. In theory, this increases security and modularity, and even allows for a permission system within a single app. The approach might be excusable with sparse accesses, but what is the CPU impact when the accesses occur periodically, or after every user action? Keep in mind that CPU activity translates not only into slower response times but also into battery consumption on mobile phones. Therefore, if performance becomes an issue, and it rarely doesn't, an open state should be considered the default. The database can then be opened in the onCreate or onResume events of the activity, and closed in onDestroy.

Similar events could be implemented or overridden in services and widgets, but this raises a lifecycle issue: the database instances in those contexts will be different, will live only as long as their contexts do, and their cohabitation could cause problems. The singleton pattern can be employed here: if our database wrapper becomes a singleton, it will have only one true instance, forcing references (and not instances) to have a short life. This also eliminates the ultimately redundant overhead of creating parallel instances which do the same thing.
However, a Context needs to be passed to the constructor, and passing short-lived contexts to long-lived singletons wouldn’t be appropriate. This is why instead of using the local context, getApplicationContext() should be called on it; this will return the context of the single, global Application object of the current process. An instance of our own database wrapper would be obtained like this:
  sDbHelper = ToDoDB.getInstance(getApplicationContext());


And the public static getInstance function would look like this (the actual constructor should remain private, as it is a singleton):
  public static final ToDoDB getInstance(Context c) {
    return sInst != null ? sInst : (sInst = new ToDoDB(c).open());
  }


This approach relies on Android to close the actual database, since closing it explicitly, for example in a main activity, might prevent the app's service from using it, or cause other such conflicts. This way, the database is closed only when it is no longer used.


Fig. 1: Non-Singleton vs. Singleton approach

 

  3. Constraints

Some features might have certain constraints. For example, an app might offer the possibility to import a database from an external source. If this happens at a filesystem level, by overwriting files, the database will first have to be closed (released). After copying the files, it must be reopened, so it’s available to the app again [5] (the copy method can be implemented using standard Java):


  public static final void importBackupSD(final Context c) {
    sDbHelper.close();
    try {
      Utils.copy(new File("/sdcard/Tag-ToDo_data/database_backup"), new File(
          "/data/data/com.android.todo/databases"));
    } catch (Exception e) {
      Utils.showDialog(R.string.notification, R.string.import_fail, c);
    }
    sDbHelper = ToDoDB.getInstance(c);
  }


Databases have, of course, different versions (e.g. when the app adds a feature, it might also need to change or add fields). This can be done in onUpgrade [5]:

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
      // upgrade to db v74 (corresponding to app v1.2.0) or bigger;
      // 4 columns need to be added for entry dates (and other possible
      // future extra options).
      if (oldVersion < 74 && newVersion >= 74) {
        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD "
              + KEY_EXTRA_OPTIONS + " INTEGER");
        } catch (Exception e) {
          // if we are here, it means there has been a downgrade and
          // then an upgrade; we don't need to delete the columns, but
          // we need to prevent an actual exception
        }

        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD " + KEY_DUE_YEAR
              + " INTEGER");
        } catch (Exception e) {
        }

        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD " + KEY_DUE_MONTH
              + " INTEGER");
        } catch (Exception e) {
        }

        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD " + KEY_DUE_DATE
              + " INTEGER");
        } catch (Exception e) {
        }
      }

      // upgrade to db v75 (corresponding to app v1.3.0) or bigger;
      // a column needs to be added for written notes
      if (oldVersion < 75 && newVersion >= 75) {
        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD "
              + KEY_WRITTEN_NOTE + " TEXT");
        } catch (Exception e) {
        }
      }
    }


However, certain SQLite exceptions might still occur. For example, if a database with an older version has been imported and the app, now at a newer version, queries a nonexistent field, there will be a force-close (the app will crash). The code above is useful in this situation as well, because it allows us to easily repair the database by forcing the missing subsequent upgrades until the current version is reached. Since onUpgrade is a method, not just an event, it can be called programmatically; furthermore, its signature allows for successive and independent calls. These calls can be grouped in a method implemented in our DatabaseHelper class:

    public void upgrade(SQLiteDatabase db){
      onUpgrade(db, 73, 74);
      onUpgrade(db, 74, 75);
      onUpgrade(db, 75, 76);
      onUpgrade(db, 76, 78);
      onUpgrade(db, 78, 79);
      onUpgrade(db, 81, 82);
      onUpgrade(db, 85, 86);
      onUpgrade(db, 91, 92);
    }


Repairing the database can be done in two ways, one simpler and one more involved. The simple way is to call the upgrade method and ensure it is called only once upon the exception, so as not to enter an infinite loop when the DB version isn't the cause. This way, the database self-repairs based on its own design. A more foolproof way of handling such exceptions could be generating and executing SQL commands automatically based on the exception text, extracting the names of the problematic fields or tables and adding them to the schema with a default value, but the extra effort in this second case might not be worth it.
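The simpler approach boils down to a retry-once guard. Here is a plain-Java sketch of that pattern with no Android dependencies; `withRepair`, the operation, and the repair hook are hypothetical names standing in for a database query and a call to our upgrade method:

```java
import java.util.function.Supplier;

public class RetryOnce {
    // Attempt an operation; on failure, run a one-shot repair and retry once.
    static <T> T withRepair(Supplier<T> op, Runnable repair) {
        try {
            return op.get();
        } catch (RuntimeException e) {
            repair.run();      // e.g. databaseHelper.upgrade(db)
            return op.get();   // a second failure propagates normally
        }
    }

    public static void main(String[] args) {
        final boolean[] repaired = {false};
        String result = withRepair(
            () -> {
                // simulate "no such column" until the repair has run
                if (!repaired[0]) throw new RuntimeException("no such column");
                return "ok";
            },
            () -> repaired[0] = true);
        System.out.println(result);
    }
}
```

Because the repair runs at most once per attempt, a failure unrelated to the DB version simply surfaces as an exception instead of looping forever.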

  4. Flags

Using flags can be a straightforward design decision, but their advantages are worth mentioning. Flags usually have a passive nature, meaning that changing them does not execute a particular action. They can be a very useful tool for customizing user interaction by affecting query results (e.g. to-do list users might want to sort their tasks, some alphabetically, others by priority, which could be done by optionally appending an 'ORDER BY' clause to the SQL code). Setters can be exposed by our own wrapper, while its inner methods read the flags directly.
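As a minimal sketch of this idea (class, enum, and column names below are illustrative, not the app's actual code), a sort flag passively alters the SQL a wrapper builds:

```java
public class SortFlagDemo {
    enum Sort { NONE, ALPHA, PRIORITY }

    // Passive flag: changing it triggers no action by itself
    private static Sort sortFlag = Sort.NONE;

    // Setter exposed by the wrapper
    static void setSort(Sort s) { sortFlag = s; }

    // Inner query methods read the flag directly when building SQL
    static String buildQuery() {
        String sql = "SELECT * FROM tasks";
        switch (sortFlag) {
            case ALPHA:    return sql + " ORDER BY name";
            case PRIORITY: return sql + " ORDER BY priority DESC";
            default:       return sql;
        }
    }

    public static void main(String[] args) {
        System.out.println(buildQuery());
        setSort(Sort.ALPHA);
        System.out.println(buildQuery());
    }
}
```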


5. Events & Synchronization

Analyzing the mentioned ideas from the synchronization point of view reveals a few differences as well. When the default state is closed, you can consider the system synchronous (even if minimally) because specific methods need to be called before and after accessing the database (e.g. open and close). In other words, the application can’t unconditionally access at will, no matter the calling location, unless it opens the database first. With an open default state designed after the singleton pattern, this system is basically asynchronous, eliminating the necessity of calling the closing method and allowing any type of access from any location.
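The open-by-default singleton idea, stripped of Android specifics, looks roughly like this (class and method names are illustrative):

```java
public class Wrapper {
    private static Wrapper instance;   // the single shared handle

    private Wrapper() {
        // the database would be opened once, here, on first use
    }

    // Any caller, from any location, gets the same already-open handle;
    // no open()/close() pairing is required around each access.
    static synchronized Wrapper get() {
        if (instance == null) instance = new Wrapper();
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(Wrapper.get() == Wrapper.get());
    }
}
```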

Although only remotely related to databases from the documentation point of view, the connection with the UI and the way events propagate are very important, especially since some events can be used as so-called control signals. Events flow in a bottom-up manner and can be consumed (stopped) or allowed to continue along the propagation chain through the boolean return value, as seen in the following diagram:



Fig. 2: Event propagation
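The boolean-consumption chain of Fig. 2 can be sketched in plain Java (handler names and the event type are illustrative; on Android the boolean is, for example, the return value of onTouchEvent):

```java
import java.util.Arrays;
import java.util.List;

public class EventChain {
    interface Handler { boolean onEvent(String e); } // true = consumed

    public static void main(String[] args) {
        // Bottom-up order: child first, then parent, then root
        List<Handler> chain = Arrays.asList(
            e -> { System.out.println("child saw " + e); return false; },
            e -> { System.out.println("parent consumed " + e); return true; },
            e -> { System.out.println("root saw " + e); return false; });

        for (Handler h : chain) {
            if (h.onEvent("tap")) break; // consumed: propagation stops here
        }
    }
}
```

The root handler never sees the event because the parent consumed it by returning true.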


Events can be used as control signals by translating user actions into different uses for the same UI elements. For example, a LinearLayout could be used to list all the tasks in a to-do list, but the same layout would also be the best choice for letting the user select a task (e.g. moving a task under another task). Another example would be using a Spinner dialog for a similar double purpose: choosing tags from a list to show their content, or choosing them as new hosts for previously selected tasks.




Fig. 3: Control signal example


Where preconditions and postconditions are concerned, there aren’t any except the ones imposed by design in the actual access methods implemented by the developer in the wrapper. Also, there aren’t any data type compatibility issues if parameters are properly included in the SQL or if Android’s special classes are used [5]:


 

  final ContentValues args = new ContentValues();
  args.put(KEY_NOTE_IS_AUDIO, 1);
  db.update(DB_ENTRY_TABLE, args, KEY_NAME + " = '" + taskName
      + "'", null);


This example updates a field which satisfies a certain condition with a new integer value.


6. Optimizations

One of the possible optimizations is using an abstract mother class for different kinds of tailored database wrappers. For example, in an alarm listener it might not make any sense to instantiate the entire wrapper with all its features; a stripped-down version would work better.

Another optimization is encoding multiple values into a single one. This can be done not only for security reasons, but also to decrease the amount of stored data (the extra effort is transferred to the CPU). For example, a date – composed of year, month and day – can be encoded as a single integer value:


 

  public int getDueDate(String task) {
    final Cursor entry = mDb.query(DB_ENTRY_TABLE, new String[] { KEY_ROWID,
        KEY_NAME, KEY_DUE_YEAR, KEY_DUE_MONTH, KEY_DUE_DATE }, KEY_NAME
        + " = '" + task + "'", null, null, null, null);
    // for now, assuming we have a task named like this
    entry.moveToFirst();
    final int e = 372 * entry.getInt(entry.getColumnIndex(KEY_DUE_YEAR)) + 31
        * entry.getInt(entry.getColumnIndex(KEY_DUE_MONTH))
        + entry.getInt(entry.getColumnIndex(KEY_DUE_DATE));
    entry.close();
    return e;
  }


You can then decode the needed part using the DIV or MOD operators (/ and % in Java). 
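For instance, the parts can be recovered as follows (a sketch; note that the multipliers 372 = 12 × 31 and 31 assume the month and day values stay below them):

```java
public class DateCodec {
    // Same encoding as getDueDate above: 372 * year + 31 * month + day
    static int encode(int year, int month, int day) {
        return 372 * year + 31 * month + day;
    }

    public static void main(String[] args) {
        int e = encode(2010, 5, 17);
        int year = e / 372;          // DIV strips month and day
        int month = (e % 372) / 31;  // MOD then DIV isolates the month
        int day = e % 31;            // works while day < 31
        System.out.println(year + "-" + month + "-" + day);
    }
}
```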


7. Conclusion

SQLite databases on Android are a modern way of managing data on phones. Because of the multiple purposes mobile devices serve, wrapper and helper classes have been and can be created as part of the platform to ensure high-quality data management. Also, the Android operating system is constantly evolving, providing compiler and performance optimizations that improve database use even further.


References

[1] Android: Context [WWW] http://developer.android.com/reference/android/content/Context.html

[2] Android: Activity [WWW] http://developer.android.com/reference/android/app/Activity.html

[3] Android: Service [WWW] http://developer.android.com/reference/android/app/Service.html

[4] Android: SQLiteOpenHelper [WWW] http://developer.android.com/reference/android/database/sqlite/SQLiteOpenHelper.html

[5] Filimon T.: Tag-ToDo-List [APP] http://code.google.com/p/tag-todo-list


Change Your Joomla Database Login

In this article we will show you how to change or verify the database login for your Joomla site. You will sometimes need to do this if:

  • Your site can't access the database
  • You've been hacked and need to make the site more secure

Login to Your Hosting Account

 

Access Your Database Area

 

Create a New Database User

Be sure to take a careful note of your password.

 

Add the New User to Your Database

Make sure to choose the correct user and the correct database, then click “Add”.

 

Give the New User All Privileges

 

Login to Your File Manager

 

Open Configuration.php

This is the file that connects your Joomla files to your database. It needs to have the correct username and password.

 

Update the User and Password Details

Scroll down until you see the $user and $password fields. Insert the information you created earlier. Then click “Save” and check the front of your site to ensure everything is working correctly.

Your valuable comments after reading this article will be highly appreciated. Also, if you have any issues in Joomla, you can email harsha@vishmitha.com or post on the Supporting Forum to get assistance.


Understanding Search Engine Marketing

After the first few lessons covering the basics of internet marketing, we are now ready for the next level of understanding. We now move on to the most discussed area: "Search Engine Marketing".

Search engine marketing mainly involves "Search Engine Optimization (SEO)" and "Pay Per Click (PPC)" advertising. The objective of search engine marketing is to make sure a link to your website is placed on the first search engine result page (SERP) of the major search engines for queries (keywords) related to your business. For example, imagine that you own a small budget hotel in Colombo. When someone searches for "budget hotels in Colombo" on Google, if your website does not show up on the first page of the SERP, you will lose a whole lot of potential customers.

In order to place a link to your website on Google's SERP, you can choose between two options.

  1. Sign up with Google Adwords and place sponsored links (PPC)
  2. Optimize your website for Google's search algorithm, and make sure your link appears in the organic results section of the SERP (SEO).

Search engines are the most popular method for target customers to find you; as such, they are the most vital avenue for letting customers discover your business.

Currently, search engines around the world together receive around 400,000,000 searches per day. Searches are done with the help of keywords: as a rule, people type a short phrase of two to five keywords to find what they are looking for, whether information, products, or services.

In response to this query, a search engine will pick from its huge database of Web pages those results it considers relevant to the Web surfer's terms, and display the list of these results to the surfer. The list may be very long and include several million results (remember that nowadays the number of pages on the Web reaches 2.1 trillion, i.e. 2,100,000,000,000), so the results are displayed in order of relevancy and broken into many pages (most commonly 10 results per page). Most Web surfers rarely go further than the third page of results, unless they are considerably interested in a wide range of materials (e.g. for scientific research). One reason for this is that they commonly find what they are looking for on those first pages without needing to dive any deeper.

That’s why a position among the first 30 results (or “top-30 listing”) is a coveted goal.

There used to be a great variety of search engines, but now after major reshuffles and partnerships there are just several giant search monopolies that are most popular among Web surfers and which need to be targeted by optimizers.

There are – and the search engines are aware of this – more popular searches and less popular searches. For instance, a search on the word “myrmecology” is conducted on the Web much more rarely than a search for “Web hosting”. Search engines make money by offering special high positions (most often called “sponsored results”) for popular terms, ensuring that a site will appear to Web surfers when they search for this term, and that it will have the best visibility. The more popular the term, the more you will have to pay for such a listing.

The term "search engine" (SE) is often misused to describe both directories and pure search engines. In fact, they are not the same; the difference lies in how their result listings are generated. Search services fall into four categories:

  • crawler-based (traditional, common) search engines;
  • directories (mostly human-edited catalogs);
  • hybrid engines (META engines and those using other engines’ results);
  • pay-per-performance and paid inclusion engines.

Crawler-based SEs use special software, known as spiders or Web crawlers, to automatically and regularly visit websites and build their giant Web page repositories. Human-edited directories are different: the pages stored in their repositories are added solely through manual submission. Directories, for the most part, require manual submission and use certain mechanisms (particularly CAPTCHA images) to prevent pages from being submitted automatically. After completing the submission procedure, your URL will be queued for review by an editor, who is, luckily, a human. Then what are hybrid engines? Some engines also have an integrated directory linking to them, containing websites that have already been reviewed or evaluated. When a search query is sent to a hybrid engine, the sites already evaluated are usually not scanned for matches; the user has to select them explicitly. Whether a site is added to an engine's directory generally depends on a mixture of luck and content quality. Sometimes you may "apply" for a review of your website, but there's no guarantee that it will be done.

SEO (search engine optimization) is the solution for making your pages more search-engine friendly. Optimization is mostly oriented towards crawler-based engines, which are the most popular on the Internet.

PPC is most often used when businesses find optimizing their website for search engines too time-consuming and tedious. Rather than taking that "difficult route", companies advertise their links on PPC channels such as Google Adwords, Yahoo Search Marketing or Bing AdCenter.

(Image URL: http://lh3.ggpht.com/_Ty0HUPmsLIA/S_FlwSAkbDI/AAAAAAAAAwc/UQscESra_MM/s912/SERP.JPG)

 


Blog style view – Daily blogs

This Joomla layout override article shows you how to modify the blog style views for articles so that the "day" date is shown above all the articles posted on the same day. After that, each article on the same day just displays the time it was posted, as shown in the screenshot.

The steps are easy to follow or you can just download the files via the link at the end of the article.

We are going to do our layout override example on the “category” view in com_content – that’s the component that controls the way all your articles display on your site. We don’t want to modify the original files because we risk losing our changes each time we upgrade the Joomla source files. So what we do is copy the original blog layout file to a layout override folder in the default template you are using for your Joomla site, as follows:
Copy: /components/com_content/views/category/tmpl/blog_item.php

to: /templates/rhuk_milkyway/html/com_content/category/blog_item.php

Open the new blog layout override file, blog_item.php, in your favourite editor and change the top of the layout override file to the following:
<?php // no direct access
defined('_JEXEC') or die('Restricted access');

// Set up some variables
$fDate = JHTML::_('date', $this->item->created, '%B %d, %Y');

// Remember which dates we have used
if (!isset($this->usedDates)) :
$this->usedDates = array();
endif;

// Have we already shown it?
$showDate = !isset($this->usedDates[$fDate]);

// Now set that we've used it
$this->usedDates[$fDate] = true;
?>

What we are doing in this block is storing a new, formatted "day" date in the variable $fDate. You can change the way the date looks by changing the third argument. See the PHP manual for strftime for other things you can add to the formatted date (concentrate on the table of letters with a %-sign in front of them – it's a bit weird but just go with it).

Next we set up an internal view variable called usedDates to track the dates that we have displayed. After that we work out whether we should show the date and put that in a variable called $showDate.

To start putting this together, find the <table> tag a few lines down. If we are displaying the date, we add a new row to the table there, containing the formatted date from $fDate (wrapped in an h3 tag in the downloadable files) and output only when $showDate is true.

That will display the "day" date above the first article in the "day".

Lastly we want to change the article date that is displayed to only show the time. Go down one hundred or so lines in the override file and find this line:
<?php echo JHTML::_('date', $this->item->created, JText::_('DATE_FORMAT_LC2')); ?>

You need to change it to this:
<?php echo JHTML::_('date', $this->item->created, '%H:%M %Z'); ?>

You can adjust any of the code in the layout override outlined above to suit your own requirements or your own template. Most templates won't display articles within a table (generally considered bad form nowadays), so you won't have to worry about putting the "day" date in a table row. Place it in a div or whatever other tag you are happy with. Likewise, change the h3 tag to whatever suits.

The Joomla "section" and "frontpage" views can also be altered in a similar fashion with separate layout override files. For best results, use a single-column display; it might look a bit weird in two or more columns. I've zipped all the Joomla layout override files for easy download. To install them, unzip the files into the /html/ folder in your default Joomla template. Remember to back up any files and folders that already exist (just in case).

Your valuable comments after reading this article will be highly appreciated. Also, if you have any issues in Joomla, you can email harsha@vishmitha.com or post on the Supporting Forum to get assistance.

The next article will discuss databases in Joomla.

Artificial Intelligence : Informed search

In the previous article on Artificial Intelligence, which appeared in diGIT's April 2010 issue, we discussed search strategies for problem solving. There we introduced the two main strategies, 'uninformed search' and 'informed search', and discussed only 'uninformed search' in detail. Today we are going to discuss what is meant by 'informed search'. As the name implies, it uses problem-specific knowledge beyond the problem definition itself to find the best solution to the problem at hand, unlike the previously discussed 'uninformed search', where no additional knowledge about the problem is provided. Informed search finds better problem-specific solutions more efficiently than the other strategy mentioned before. Informed search is also known as 'heuristic search', which indicates that this type of strategy uses heuristics (cues) about the problem domain to come up with the most suitable solution.

Types of Informed Search
1. Best-First Search

This is a general graph search algorithm. At each node of the graph it must evaluate which node to expand next in order to arrive at the optimal solution efficiently. This is achieved by introducing an 'evaluation function', which decides at each point which node to expand next: the node with the lowest evaluation function result is chosen. Usually the evaluation function measures the distance from the node in question to the goal, so a lower value is preferred over a higher one. Another key component of this algorithm is the 'heuristic function', a part of the evaluation function, defined as the estimated cost of the cheapest path from a node to a goal node. The algorithm therefore follows the lowest-cost paths from the best nodes of the graph towards the goal node, leading to the best solution; the heuristic function value is zero at the goal node, which acts as a stopping condition for the algorithm to terminate when it reaches the goal.

Based on how the 'evaluation function' and the 'heuristic function' are linked together, there are different variations of the above-mentioned 'Best-First Search' strategy, which are explained in detail below.

1.1. Greedy Best-First Search

This search method expands the node that appears closest to the goal node, which is likely to lead to a solution quickly. In this case, the 'heuristic function' (h(n)) and the 'evaluation function' (f(n)) are the same. One can come up with different heuristic functions based on the problem domain in question and apply them.

Here: f(n) = h(n)

Let's first consider a small example involving distances between cities, where a tourist wants to find the path to travel in order to minimize the distance from city 'A' to city 'N'.

The following table shows the distances in kilometers (approximate road distance) from the destination city ‘N’ to each respective city.

From City   Road distance (km)   From City   Road distance (km)
A           450                  H           300
B           345                  I           320
C           420                  J           50
D           350                  K           220
E           175                  L           75
F           250                  M           200
G           280                  N           0

Let’s also assume that the existing roads connecting the above cities are as follows.

Now let’s try to see how to perform greedy best-first search to go from city A to city N. So starting from city A, one can go to either city B, H or D according to the above road map.

Since the heuristic is based on expanding the node with the lowest distance, in the first step above the algorithm would select to go to city H from city A, since it has the lowest road distance. Then it would see which city to move to from H.

Since the lowest distance is to city K, it would move to that city next, as shown above. Based on the heuristic of expanding the node with the lowest distance to the goal node, the algorithm moves on until it reaches the destination city 'N', as shown below.

1.2. A* Search

This is another best-first search strategy, which relates the evaluation function and the heuristic function differently from the afore-explained 'greedy best-first search'. Here the evaluation function (f(n)) is defined as the combination of the cost to reach the node (g(n)) and the estimated cost from the node to the goal node (h(n)).

Here: f(n) = g(n) + h(n)

Therefore, this search strategy not only uses the heuristic function giving the estimated distance from each node to the goal node (h(n)), but also uses the path cost travelled so far from the start node to the node in question (g(n)) in the evaluation. This is a better heuristic, using the cue that the distance travelled should be minimized along each possible path.
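To make f(n) = g(n) + h(n) concrete, here is a small runnable A* sketch in Java. The road connections and edge costs below are invented for illustration (the article's road map figure is not reproduced here); only the h values of the cities used are borrowed from the table above:

```java
import java.util.*;

public class AStarDemo {
    // Hypothetical road map: bidirectional edges with costs in km
    static Map<String, Map<String, Integer>> roads = new HashMap<>();
    // Heuristic h(n): estimated distance from each city to goal "N"
    static Map<String, Integer> h = new HashMap<>();

    static void road(String a, String b, int km) {
        roads.computeIfAbsent(a, k -> new HashMap<>()).put(b, km);
        roads.computeIfAbsent(b, k -> new HashMap<>()).put(a, km);
    }

    // Returns the cheapest travelled cost g from start to goal
    static int aStar(String start, String goal) {
        Map<String, Integer> g = new HashMap<>();
        g.put(start, 0);
        // Frontier ordered by f(n) = g(n) + h(n)
        PriorityQueue<String> open = new PriorityQueue<>(
            Comparator.comparingInt((String n) -> g.get(n) + h.get(n)));
        open.add(start);
        Set<String> closed = new HashSet<>();
        while (!open.isEmpty()) {
            String n = open.poll();
            if (n.equals(goal)) return g.get(n);
            if (!closed.add(n)) continue; // already expanded
            for (Map.Entry<String, Integer> e : roads.get(n).entrySet()) {
                int cost = g.get(n) + e.getValue();
                if (cost < g.getOrDefault(e.getKey(), Integer.MAX_VALUE)) {
                    g.put(e.getKey(), cost);
                    open.remove(e.getKey()); // re-queue with the better g
                    open.add(e.getKey());
                }
            }
        }
        return -1; // goal unreachable
    }

    public static void main(String[] args) {
        road("A", "B", 100); road("A", "H", 140); road("B", "J", 280);
        road("H", "K", 90);  road("K", "N", 230); road("J", "N", 60);
        h.put("A", 450); h.put("B", 345); h.put("H", 300);
        h.put("J", 50);  h.put("K", 220); h.put("N", 0);
        System.out.println(aStar("A", "N"));
    }
}
```

With these invented edges, A-B-J-N (100 + 280 + 60 = 440) beats A-H-K-N (140 + 90 + 230 = 460), even though greedy best-first would start towards H.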

The search continues…..

In this article, we briefly discussed some informed (heuristic-based) search strategies applicable to problem solving. Hope you all got a general idea of how this strategy differs from the uninformed search strategies described in the previous article, and also a sense of how different heuristic-based search strategies work. We will continue with more heuristic-based search strategies, and how to come up with heuristic functions for them, in the next article.


1. Introduction

In the last article, we looked at the Stack, which is a LIFO (Last In First Out) structure: the element that goes in last (and sits on top) comes out first (when popped). We will now have a look at a data structure called the Queue, and its variants.

2. Queue

Recollect how a Stack had a 'top'. A Queue is a data structure with two points, a 'front' and a 'rear'. Elements are added at the rear and removed from the front. Thus, elements are removed in the order of their insertion into the queue. Hence, the Queue is a FIFO (First In First Out) data structure.

2.1 Operations

Thus, there are two basic operations in a queue:

i) Enqueue: To insert an element at the rear of the queue. The complexity of this operation is expected to be O(1).

ii) Dequeue: To remove an element from the front of the queue. The complexity of this operation is expected to be O(1).


A queue and its operations.

2.2 Implementation

As with a stack, a queue can also be represented using either an array or a linked list of elements.

Thus, we can see that the implementation should be fairly simple to understand if you have grasped the concept of linked lists. The array implementation of the queue has two pointers, front and rear, which point to indices within the array acting as the front and the rear. In the dequeue operation the front pointer is incremented by one, and in the enqueue operation the rear is incremented by one. However, once rear reaches the fixed boundary of the array, it can no longer be incremented (even though there may be free space in the array) until certain special adjustments are made.

To use the array without any problems, we implement a slightly different way of manipulating the front and rear pointers. This concept is known as a Circular Queue or in general, a Circular Buffer.

The array is considered to be a ‘circular’ one, joined end-to-end. So, once the rear reaches the boundary, it simply goes to the next position which is the beginning of the array (provided front does not lie there), and this process can go on until rear meets front.


A Circular Queue

Similarly after repeated deletions, the front may approach the boundary too, and can similarly wrap around too.
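The wrap-around logic above can be sketched as a minimal array-backed circular queue in Java (a sketch of the idea, not any particular library's implementation):

```java
public class CircularQueue {
    private final int[] buf;
    private int front = 0, rear = 0, size = 0;

    CircularQueue(int capacity) { buf = new int[capacity]; }

    boolean enqueue(int x) {
        if (size == buf.length) return false;   // full: rear has met front
        buf[rear] = x;
        rear = (rear + 1) % buf.length;         // rear wraps past the boundary
        size++;
        return true;
    }

    Integer dequeue() {
        if (size == 0) return null;             // empty
        int x = buf[front];
        front = (front + 1) % buf.length;       // front wraps the same way
        size--;
        return x;
    }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(3);
        q.enqueue(1); q.enqueue(2); q.enqueue(3);
        System.out.println(q.dequeue());        // FIFO: first in, first out
        q.enqueue(4);                           // rear wraps to index 0
        System.out.println(q.dequeue() + " " + q.dequeue() + " " + q.dequeue());
    }
}
```

Both operations stay O(1): each is a constant number of index updates, with the modulo handling the boundary.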

2.3 Other types of Queues

A Double-Ended queue allows enqueuing and dequeuing at both the front and the rear. It is also known as a deque (pronounced 'deck'). The complexity of both operations remains O(1).

A Priority queue requires elements to be added with a 'priority'. When an element is to be removed from the front, the element with the greatest priority is removed first, regardless of when it arrived. A simple implementation is O(N) for Enqueue and O(1) for Dequeue if a list of elements sorted by priority is maintained, where N is the number of elements in the queue. O(1) for Enqueue and O(N) for Dequeue is also common when a sorted list is not maintained, and the element with the highest priority is looked up in the list at the time of dequeuing.

However, common implementations use a special data structure called Heap, which gives O(log N) insertion and removal complexities.
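Java's java.util.PriorityQueue is exactly such a heap-backed queue (a min-heap by default, so the smallest value is treated as the highest priority):

```java
import java.util.PriorityQueue;

public class HeapQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>(); // min-heap
        pq.add(5);  // O(log N) enqueue
        pq.add(1);
        pq.add(3);
        // O(log N) dequeue always yields the highest-priority (smallest) element,
        // regardless of insertion order
        System.out.println(pq.poll() + " " + pq.poll() + " " + pq.poll());
    }
}
```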

2.4 Standard Implementations

A queue is implemented in the C++ STL (Standard Template Library) as the queue container adaptor, with push to enqueue at the back and pop to dequeue from the front; the deque container additionally provides push_front, push_back, pop_front, and pop_back. There are implementations of the Queue in most modern languages; however, one is expected to be comfortable implementing them without external help.

3.0 Tasks for You
  1. Implement the Circular Queue
  2. Implement Double-Ended Queue.