Teodor Filimon
Teo is a software engineer from Romania who works at Atoss, developing a workforce management product. He is a former Google API Guru and Google Desktop ‘Hall of Fame’ developer.

While developing many Android apps I’ve found that a lot of methods are not application-specific, which means I can extract them into a Utils class and share it among various projects. One of my most useful methods is Utils.isEmulator(). It lets me write behavior into the app that is optimized for development when the app is running inside an SDK emulator, and optimized for end users when it’s running on an actual device.

Here’s a use case example. Let’s say you’re building an alarm clock for Android and you want to test the snooze feature, and the snooze period has a minimum length of 5 minutes. It would be a waste of time to wait 5 minutes every time you hit snooze while testing on the emulator, so why not set it to 1 minute in that case?

final int minutes = Utils.isEmulator() ? 1 : 5;

While developing the application you’ll find many such use cases yourself. At the moment there is no foolproof, officially recommended way of checking whether you’re running in an emulator, but the following implementation has never failed me:

private static final boolean IS_EMULATOR = android.os.Build.MODEL.endsWith("sdk");

public static boolean isEmulator() {
  return IS_EMULATOR;
}

Packaging both behaviors in the same ‘shipment’ (the .apk file) is also convenient because it lets you enable a debug/development mode when you’re helping customers or just testing functionality on real devices (e.g. by forcing IS_EMULATOR to true in the implementation above when some button is pressed). In my next process tip, I’ll share another use case for isEmulator() which isn’t application-specific and which you can use in every single app you build from now on.
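
Here’s a sketch of that debug/development mode idea; the mutable flag and method names are hypothetical, not from my actual Utils class:

private static boolean sDebugMode = isEmulator();

public static boolean isDebugMode() {
  return sDebugMode;
}

// e.g. called from a hidden button's OnClickListener on a real device
public static void enableDebugMode() {
  sDebugMode = true;
}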

Efficient programming on mobile devices not only saves CPU, but battery as well

Everyone uses ArrayLists these days because they’re flexible. When you iterate over one, you can use the enhanced for loop:

for (MyClass object : myArrayList) {
  ...
}

You use this because it’s simple to understand and easy to write. But did you know it’s consistently about 3 times slower than the way you learned to do it in high school? :)

for (int i=0; i<myArrayList.size(); i++){
  final MyClass object = myArrayList.get(i);
  ...
}

And you can squeeze even more out of this by looping the other way around. In the loop above, the condition calls the ArrayList’s size() method every time the counter is incremented. If you loop backwards instead, size() is read only once at the beginning, and you count down towards 0, a constant:

for (int i=myArrayList.size()-1; i>-1; i--){
  final MyClass object = myArrayList.get(i);
  ...
}

Don’t forget to write i--. The first time you try this you’ll write i++ out of habit and wonder why it crashed. This subject runs pretty deep and also involves the JIT (just-in-time compiler) and the automatic use of Iterators behind the scenes. When you’re not sure what really matters in terms of speed, the best way to find out is to write a small test yourself and run it directly on the emulator, measuring time the good old way with System.currentTimeMillis()!
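
For example, a minimal timing sketch along those lines (the list contents and sizes are arbitrary, and you should run it a few times so the JIT warms up):

final ArrayList<Integer> list = new ArrayList<Integer>();
for (int i = 0; i < 100000; i++) {
  list.add(i);
}

long start = System.currentTimeMillis();
long sum = 0;
for (Integer value : list) { // enhanced for loop (uses an Iterator)
  sum += value;
}
final long enhanced = System.currentTimeMillis() - start;

start = System.currentTimeMillis();
sum = 0;
for (int i = list.size() - 1; i > -1; i--) { // countdown loop
  sum += list.get(i);
}
final long countdown = System.currentTimeMillis() - start;

android.util.Log.d("LoopTest", "enhanced=" + enhanced + "ms, countdown=" + countdown + "ms");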

SQLite databases are the recommended way of storing significant amounts of data on the Android operating system. Because of the way user focus is organized on this platform, there are several kinds of contexts databases can be used in, e.g. activities (the Activity class), services (the Service class) or widgets. The first two inherit from the abstract Context class, while context references are passed into the latter; these, along with optimizations, are discussed in the next sections.


  1. Introduction

Context is actually an interface to global information about the application environment, the implementation being provided by Android itself. The existence of Context references helps us determine their scope and even what we can do to optimize classes constructed with such references [1]. An Activity is, in a broad sense, the logical unit of the user’s interaction with the phone. Usually, only one Activity can have focus [2]. A Service is similar, but it doesn’t target the UI and is designed for longer-running operations; it’s a long-lived component of an Android application [3].

SQLite is an easy-to-use database system; there are helper classes to aid in database creation and version management, such as SQLiteOpenHelper [4]. Keep in mind that your app’s users might come from different database versions when upgrading to the latest app version, so managing structural changes by implementing the onUpgrade(…) method is recommended.


 

2. Modelling & Creating the Database

Figuring out the best object-oriented model can depend on various factors. For example, you might want to focus on so-called real-time usage needs, or you might want a trade-off with security features. Before explaining this choice, we first need to see how the database is created [5]:


  private class DatabaseHelper extends SQLiteOpenHelper {

    DatabaseHelper(Context context) {
      super(context, DB_NAME, null, DATABASE_VERSION);
      mCtx = context;
      res = context.getResources();
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
      Resources r = res;

      // creating the tag and entry tables and inserting a default tag
      db.execSQL("CREATE TABLE " + DB_TAG_TABLE + " (" + KEY_ROWID
          + " integer primary key autoincrement, " + KEY_NAME
          + " text not null);");
      db.execSQL("INSERT INTO " + DB_TAG_TABLE + " (" + KEY_NAME
          + ") VALUES ('" + r.getString(R.string.tag1) + "')");
      upgrade(db);
    }
  }


As can be seen above, actual SQL code is passed to the execSQL method of the wrapper class. In fact, this snippet already contains an important optimization: the general isolation of upgrade operations, which can be replayed for future users without being explicitly included in the SQL creation code. The advantages of a custom upgrade method are discussed in the following sections.

Classes such as DatabaseHelper are usually wrapped in other classes, becoming an inner class of the wrapper. This way, other helper methods can be implemented around the database helper. Before it’s actually used, it’s important to know that an SQLiteDatabase instance will be doing the hard work. It can be obtained from the mentioned database helper, in an open method (context use can be optimized) [5]:

  public ToDoDB open() throws SQLException {
    mDbHelper = new DatabaseHelper(mCtx);
    mDb = mDbHelper.getWritableDatabase();
    return this;
  }


Similarly, a closing method is needed – either for releasing resources or for freeing the database up for lower-level handling (more about the latter in the following sections) [5]:
  public void close() {
    mDbHelper.close();
  }


But first an object model should be established. Some developers execute their queries surrounded by the opening and closing methods, so the default state, in that case, is a closed database. In theory, this increases security and modularity, also allowing for a permission system within a single app. This approach might be excusable with sparse accesses, but what is the CPU impact when these accesses occur periodically, or after every user action? Keep in mind that CPU activity translates not only into response times but also into battery consumption on mobile phones.

Therefore, if performance becomes an issue (and it rarely doesn’t), an open state should be considered the default. The database can then be opened in the onCreate or onResume events of the activity, and closed in onDestroy. Similar events could be implemented or overridden in services and widgets, but this raises a lifecycle issue: database instances will be different in these cases and will only live as long as their contexts do, and their cohabitation could cause problems. The singleton pattern can be employed here: if our database wrapper becomes a singleton, it will have only one true instance, forcing references (and not instances) to have a short life. This also eliminates the ultimately redundant overhead of creating parallel instances which do the same thing.
However, a Context needs to be passed to the constructor, and passing short-lived contexts to long-lived singletons wouldn’t be appropriate. This is why, instead of using the local context, getApplicationContext() should be called on it; this returns the context of the single, global Application object of the current process. An instance of our own database wrapper would then be obtained like this:
  sDbHelper = ToDoDB.getInstance(getApplicationContext());


And the public static getInstance function would look like this (the actual constructor should remain private, as it is a singleton):
  public static final ToDoDB getInstance(Context c) {
    return sInst != null ? sInst : (sInst = new ToDoDB(c).open());
  }
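
Putting the pieces together, the singleton skeleton would look roughly like this (a sketch; field names are assumed to match the snippets above, and open() and close() are the methods defined earlier):

  public class ToDoDB {
    private static ToDoDB sInst; // the single true instance

    private Context mCtx;

    private ToDoDB(Context c) {
      // private constructor: instances can only be created via getInstance
      mCtx = c;
    }

    public static final ToDoDB getInstance(Context c) {
      return sInst != null ? sInst : (sInst = new ToDoDB(c).open());
    }
  }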


This approach relies on Android to close the actual database, since explicitly closing it, for example in a main activity, might prevent the app’s service from using it, or other such combinations. This way, the database is closed only when it isn’t used anymore.


Fig. 1: Non-Singleton vs. Singleton approach

 

  3. Constraints

Some features might have certain constraints. For example, an app might offer the possibility to import a database from an external source. If this happens at filesystem level, by overwriting files, the database first has to be closed (released). After copying the files, it must be reopened so it’s available to the app again [5] (the copy method can be implemented using standard Java):


  public static final void importBackupSD(final Context c) {
    sDbHelper.close();
    try {
      Utils.copy(new File("/sdcard/Tag-ToDo_data/database_backup"), new File(
          "/data/data/com.android.todo/databases"));
    } catch (Exception e) {
      Utils.showDialog(R.string.notification, R.string.import_fail, c);
    }
    sDbHelper = ToDoDB.getInstance(c);
  }
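
The copy method mentioned above can indeed be plain Java; here is a minimal recursive sketch (simplified, without the project’s actual error handling):

  public static void copy(File src, File dst) throws IOException {
    if (src.isDirectory()) {
      dst.mkdirs();
      final String[] children = src.list();
      if (children != null) {
        for (String child : children) {
          copy(new File(src, child), new File(dst, child)); // recurse into subdirectories
        }
      }
    } else {
      final InputStream in = new FileInputStream(src);
      final OutputStream out = new FileOutputStream(dst);
      try {
        final byte[] buf = new byte[8192];
        int len;
        while ((len = in.read(buf)) > 0) {
          out.write(buf, 0, len);
        }
      } finally {
        in.close();
        out.close();
      }
    }
  }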


Databases have, of course, different versions (e.g. when the app adds a feature, it might also need to change or add fields). This can be done in onUpgrade [5]:

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
      // upgrade to db v74 (corresponding to app v1.2.0) or bigger;
      // 4 columns need to be added for entry dates (and other possible
      // future extra options).
      if (oldVersion < 74 && newVersion >= 74) {
        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD "
              + KEY_EXTRA_OPTIONS + " INTEGER");
        } catch (Exception e) {
          // if we are here, it means there has been a downgrade and
          // then an upgrade; we don't need to delete the columns, but
          // we need to prevent an actual exception
        }

        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD " + KEY_DUE_YEAR
              + " INTEGER");
        } catch (Exception e) {
        }

        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD " + KEY_DUE_MONTH
              + " INTEGER");
        } catch (Exception e) {
        }

        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD " + KEY_DUE_DATE
              + " INTEGER");
        } catch (Exception e) {
        }
      }

      // upgrade to db v75 (corresponding to app v1.3.0) or bigger;
      // a column needs to be added for written notes
      if (oldVersion < 75 && newVersion >= 75) {
        try {
          db.execSQL("ALTER TABLE " + DB_ENTRY_TABLE + " ADD "
              + KEY_WRITTEN_NOTE + " TEXT");
        } catch (Exception e) {
        }
      }
    }


However, certain SQLite exceptions might still occur. For example, if a database with an older version has been imported, and the app, of a newer version, queries a nonexistent field, there will be a force-close (the app will crash). The code above is useful in this situation as well, because it allows us to easily repair the database by forcing the missing upgrades until the current version is reached; onUpgrade, being a method and not just an event handler, can be called programmatically, and its signature allows for successive and independent calls. These calls can be grouped in a method implemented in our DatabaseHelper class:

    public void upgrade(SQLiteDatabase db){
      onUpgrade(db, 73, 74);
      onUpgrade(db, 74, 75);
      onUpgrade(db, 75, 76);
      onUpgrade(db, 76, 78);
      onUpgrade(db, 78, 79);
      onUpgrade(db, 81, 82);
      onUpgrade(db, 85, 86);
      onUpgrade(db, 91, 92);
    }


Repairing the database can be done in two ways, one simple and one more complicated. The simple way involves calling the upgrade method and making sure it is called only once per exception, so as not to enter an infinite loop in case the DB version isn’t the cause. This way, the database self-repairs based on its own design. A more foolproof way of handling such exceptions would be to generate and execute SQL commands automatically based on the exception text, by extracting the names of the problematic fields or tables and adding them to the data with a default value, but the extra effort in this second case might not be worth it.
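
A sketch of the simple way (the guard flag and method name are hypothetical; the flag prevents the infinite loop mentioned above):

  private boolean mRepairAttempted = false;

  public Cursor queryWithRepair(String sql) {
    try {
      return mDb.rawQuery(sql, null);
    } catch (SQLiteException e) {
      if (mRepairAttempted) {
        throw e; // the DB version wasn't the cause; don't retry forever
      }
      mRepairAttempted = true;
      mDbHelper.upgrade(mDb); // force the missing upgrades (see above)
      return mDb.rawQuery(sql, null);
    }
  }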

  4. Flags

Using flags can be a straightforward design decision, but their advantages are worth mentioning. Flags usually have a passive nature, meaning that they don’t execute a particular action on change. They can be a very useful tool to customize user interaction by affecting query results (e.g. to-do list users might want to sort their tasks, some alphabetically, others by priority, etc., which can be done by optionally appending an ORDER BY clause to the SQL code). Setters can be implemented in our own wrapper, while its inner methods access the flags directly, as sketched below.
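
For example, a sketch of such a flag with its setter (names are hypothetical), where the ORDER BY clause is appended only when the flag is set:

  private boolean mSortAlphabetically = false; // passive: changing it triggers no action

  public void setSortAlphabetically(boolean sort) {
    mSortAlphabetically = sort;
  }

  public Cursor fetchAllTasks() {
    final String orderBy = mSortAlphabetically ? KEY_NAME + " ASC" : null;
    return mDb.query(DB_ENTRY_TABLE, null, null, null, null, null, orderBy);
  }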


5. Events & Synchronization

Analyzing the mentioned ideas from the synchronization point of view reveals a few differences as well. When the default state is closed, the system can be considered synchronous (even if minimally so), because specific methods need to be called before and after accessing the database (e.g. open and close). In other words, the application can’t access the database unconditionally, no matter the calling location, unless it opens the database first. With an open default state designed after the singleton pattern, the system is basically asynchronous, eliminating the need to call the closing method and allowing any type of access from any location.

Although only remotely related to databases from a documentation point of view, the connection with the UI and the way events propagate are very important, especially since some events can be used as so-called control signals. Events flow in a bottom-up manner and can be consumed (stopped) or allowed to continue along the propagation chain through the boolean return value, as seen in the following diagram:



Fig. 2: Event propagation


Using events as control signals means translating user actions into different uses for the same UI elements. For example, a LinearLayout could be used to list all the tasks in a to-do list, but the same layout would also be the best choice for letting the user pick a task (e.g. moving a task under another task). Another example would be using a Spinner dialog for a similar double purpose: choosing tags from a list to show their content, or choosing them as new hosts for previously selected tasks.




Fig. 3: Control signal example


Where preconditions and postconditions are concerned, there aren’t any except the ones imposed by design in the actual access methods implemented by the developer in the wrapper. Also, there aren’t any data-type compatibility issues if parameters are properly included in the SQL or if Android’s special classes are used [5]:


 

  final ContentValues args = new ContentValues();
  args.put(KEY_NOTE_IS_AUDIO, 1);
  db.update(DB_ENTRY_TABLE, args, KEY_NAME + " = '" + taskName + "'", null);


This example updates a field which satisfies a certain condition with a new integer value.


6. Optimizations

One possible optimization is using an abstract mother class for different kinds of tailored database wrappers. For example, in an alarm listener it might not make sense to instantiate the entire wrapper with all its features; a stripped-down version might work better.

Another optimization is encoding multiple values into a single one. This can be done not only for security reasons, but also to decrease the amount of stored data (the extra effort is transferred to the CPU). For example, a date – composed of year, month and day – can be encoded as a single integer value:


 

  public int getDueDate(String task) {
    final Cursor entry = mDb.query(DB_ENTRY_TABLE, new String[] { KEY_ROWID,
        KEY_NAME, KEY_DUE_YEAR, KEY_DUE_MONTH, KEY_DUE_DATE }, KEY_NAME
        + " = '" + task + "'", null, null, null, null);
    // for now, assuming we have a task named like this
    entry.moveToFirst();
    final int e = 372 * entry.getInt(entry.getColumnIndex(KEY_DUE_YEAR)) + 31
        * entry.getInt(entry.getColumnIndex(KEY_DUE_MONTH))
        + entry.getInt(entry.getColumnIndex(KEY_DUE_DATE));
    entry.close();
    return e;
  }


You can then decode the needed part using the DIV and MOD operators (/ and % in Java).
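
For instance, a decoding sketch for the scheme above (assuming months are stored 0–11 and days 0–30, so the components don’t overlap):

  final int e = getDueDate(task);
  final int year = e / 372;         // DIV: 372 = 12 * 31
  final int month = (e % 372) / 31; // what remains within the year
  final int day = e % 31;           // MOD: position within the month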


7. Conclusion

SQLite databases on Android are a modern way of managing data on phones. Because of the multiple purposes mobile devices serve, wrapper and helper classes have been and can be created as part of the platform to ensure high-quality data management. The Android operating system is also constantly evolving, providing compiler and performance optimizations that improve database use even further.


References

[1] Android: Context [WWW] http://developer.android.com/reference/android/content/Context.html

[2] Android: Activity [WWW] http://developer.android.com/reference/android/app/Activity.html

[3] Android: Service [WWW] http://developer.android.com/reference/android/app/Service.html

[4] Android: SQLiteOpenHelper [WWW] http://developer.android.com/reference/android/database/sqlite/SQLiteOpenHelper.html

[5] Filimon T.: Tag-ToDo-List [APP] http://code.google.com/p/tag-todo-list


    1. Definition & History

    Genetic algorithms are a particular class of evolutionary algorithms: an iterative way of finding exact or approximate solutions. Some of the concepts involved are inspired by biology, hence the ‘genetic’ attribute: crossover, mutation, cloning, inheritance. Some of these processes occur in nature as well (e.g. crossover or mutation); others, like cloning, don’t. Genetic algorithms work with populations formed of individuals, each generally representing a solution to a given problem. By the ‘survival of the fittest’ principle, solutions become better until an ending condition is reached: the solutions are good enough, the maximum number of generations has been reached, etc.

     

    Studies involving genetic algorithms go back to the ’50s, but they gained popularity later. In the ’80s, General Electric used them to create an industrial mainframe-based toolkit, with other similar products beginning to appear as well. Other uses included pattern recognition and games. Genetic algorithms are also useful in theoretical contexts: the knapsack problem, the traveling salesman problem, graph coloring, etc.

    2. Representation

    Theoretically, a greater number of generations means better solutions. Each generation creates a new population, of the same size, from the existing one. Ideally, individuals should be represented in the simplest way possible; the elements of this representation should be independent, because they will pass from one individual to another and because of the way they are interpreted. For example, a path through a graph might be represented as a sequence of letters (a g b …), but it would be hard to quantify differences between nodes that way. Representing it as an array of bits (1 0 0 1 0 …) is better because it exhaustively describes the solution as part of a solution space. Of course, it also depends on the problem; for example, permutation problems should allow for greater flexibility in representation.

    3. Selection & Fitness

    Since we work on populations, an initial one must be generated, usually randomly. The population size shouldn’t be small, for obvious reasons, but it shouldn’t be too big either, because of performance constraints. A good balance between the number of generations and the size of the population is recommended.

    As mentioned, the next generation always comes from the existing one, and, for this, the best individuals should be selected for reproduction, to ensure solutions become as precise as possible. The way individuals ‘reproduce’ is discussed in the next section; before that, it is necessary to implement a fitness function (a.k.a. evaluation function) to be able to compare individuals. So-called naive functions (cost, weight, value, etc.), dependent on the problem definition, can work satisfactorily. However, there is a risk that the solutions converge to a local optimum too quickly and don’t improve beyond a certain unsatisfying point. There are ways to prevent this, the simplest of which is the rare inclusion of poor individuals in the next generation; this ensures diversity on one hand and, on the other, can salvage good genes that might otherwise be lost. There is also the matter of time consumption because, depending on the parameters, evaluation can become the most intensive operation.

    4. Reproduction techniques

    This article will only focus on the most widely used techniques, with standard parameters. The basis of genetic algorithms is crossover. Let’s assume we choose 2 parents from the fitness selection:

    Parent 1 0110101011
    Parent 2 1110111101

    Having these, we then choose a random crossover point: the resulting individual will have bits from one parent up to that point and from the other parent starting from it. Assuming we have generated p=3 as the random crossover point, the result would look like this:

    Result 0110111101
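
    As a quick sketch (not from the original article), single-point crossover fits in a few lines of Java; with the parents above and p=3 it produces exactly the result shown:

    public static int[] crossover(int[] p1, int[] p2, java.util.Random rnd) {
      final int p = rnd.nextInt(p1.length); // random crossover point
      final int[] child = new int[p1.length];
      for (int i = 0; i < child.length; i++) {
        child[i] = i < p ? p1[i] : p2[i]; // bits from p1 before p, from p2 after
      }
      return child;
    }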

    It is also worth mentioning that there are several optimizations that can be made: multiple crossover points between the same parents, deliberate choice of the crossover point or an increased number of parents, but these would obviously increase the complexity of the algorithm as well.

    Another technique is cloning. Even if this does not occur naturally, it’s a good way to improve performance by ensuring that early individuals with remarkable fitness aren’t completely lost in the process of reproduction by being totally replaced by the next generation. The cloning rate should be very small though, so that only the best individuals benefit; otherwise crossover, as the main solution-search mechanism, wouldn’t receive the proper influence in the final result.

     

    On the other hand, mutation does occur in nature. Although it happens quite often, it generally doesn’t constitute the main or defining trait of an individual, and the same holds in the algorithm. Because of the need for a straightforward interpretation of individuals (and the solutions they represent), the mutation rate should be small as well and applied randomly, by changing only 1 bit within a solution.
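
    Mutation, in the same sketch style (illustrative only; the rate parameter is an assumption), just flips one random bit:

    public static void mutate(int[] individual, java.util.Random rnd, double rate) {
      if (rnd.nextDouble() < rate) { // rate should be small, e.g. 0.01
        individual[rnd.nextInt(individual.length)] ^= 1; // flip a single bit
      }
    }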

    5. Conclusion

    Genetic algorithms are easier to understand than a first impression might suggest. They are used in a wide range of applications today, from artificial intelligence to search or industry, and from the simulation of life-form behavior to scheduling or the production of lenses. Even if some of these uses are still incipient, research promises to improve and popularize genetic algorithms further in the future.

    Resources:

    • Zbigniew Michalewicz, How to Solve It: Modern Heuristics, 2002
    • www.obitko.com/tutorials/genetic-algorithms
    • Colin R. Reeves (editor), Modern Heuristic Techniques for Combinatorial Problems, 1993
    • http://xkcd.com/534/


    Mass Effect 2 is the sequel to a game that received the 2007 Game of the Year award from The New York Times, best RPG awards from IGN, Yahoo Games and the Academy of Interactive Arts & Sciences, and many other awards from the gaming community. I would say that, at least in this area, the Mass Effect lore is comparable to that of Star Wars, but, unlike the events of Star Wars, which happened ‘a long time ago’, Mass Effect is set in the future, the year 2183 to be exact, when destiny reveals its great plans for Commander Shepard in a universe full of incredible planets and species.

     

    Mass Effect 2 continues after Shepard faced the extra-galactic threat, and, without spoiling anything, I’ll tell you the scenario is built in such a way that you’re allowed to benefit from the character you developed in Mass Effect 1 by importing your saves, while also ensuring an equitable beginning for everyone. You’ll also find that decisions you made in the first part impact what happens in the sequel, sometimes quite seriously, especially where former or future team members’ evolutions are concerned. The classes are the same as in ME1 (Adept, Infiltrator, Vanguard, Sentinel, Engineer and Soldier) and you’ll be given the opportunity to change your class even if you imported your saves.

     

    You can, of course, become ‘good’ or ‘bad’ by increasing your paragon or renegade levels (which don’t necessarily exclude each other). This happens automatically depending on the decisions you make and how you navigate through conversations. The innovation here is that being paragon or renegade not only unlocks special conversation options (e.g. persuasion or intimidation) but now also allows you to take instant action as events unfold, provided you notice the window of opportunity and use it before it disappears. Additionally, the level-up screen has changed from ME1, and it may seem a bit counter-intuitive at first, but it’s actually a nice and proportional representation of your progress. Each ability requires more and more points to reach its next level, so, at times, if you want to improve a specific ability before the others, you might have some extra points you won’t be able to spend. It’s up to you to figure out a balance between being good at everything and being good at something in particular, and the way you do this should also take into account the class you’ve chosen. In case you’re not satisfied with how you distributed your points, at some point in the game you’ll have the possibility to invest in a specific research project that redistributes them, in a lab on the Normandy (your ship).

     

    Fig. 1 – The level-up screen

     

    Speaking of research projects, we get to the economic aspect of the game. There are, of course, credits, which allow you to do most things, but as a new element ME2 introduces the ship’s ability to mine certain metals and materials (Palladium, Iridium, Platinum and the rare Element Zero) through specialized probes which the ship can launch when near planets. Scanning for these resources is done manually, from orbit, and sometimes anomalies are discovered, even leading to special missions.

     


    Fig. 2 – Scanning a planet for resources

     

    Various quantities of these resources can be spent to unlock research projects, which improve your fighting/tech equipment or your ship. Even choosing what research to invest in will have repercussions later in the game, so think strategically :)

     

    As in ME1, you gain your team members’ loyalty by accomplishing missions usually related to their status prior to meeting you. Depending on how you fight, what decisions you make and which missions you accomplish, you’ll receive achievements (as medals) which you can view in the captain’s cabin – your own private place on the ship where you can do all kinds of leisurely stuff. This brings me to the game’s improved sense of realism. For example, a subtle detail, but one with a very immediate feel to it, is seeing coffee machines on tables. Even the mini-games used to hack datapads or gain access to various rooms are surprisingly true to their purpose: to bypass security you work on an actual circuit, and to hack code you match actual snippets of code. There are other mini-games and puzzles spread throughout the game, some tied to non-essential assignments but some vital to the mission.

     

    Fig. 3 – Bypassing security

     

    Fig. 4 – Hacking code

     

    The combat system is fairly new in ME2, although you can use powers in the same way: you now need thermal clips instead of your weapon just cooling down on its own – the heat generated when you fire goes into those clips. This is another realistic part, because sometimes you might run out of them mid-battle, so you’ll have to spot some and avoid enemy fire until you get there; or you can change weapons. When you take cover, if you’re crouched and your cover isn’t very tall, you can jump over it, taking advantage of a pause in enemy fire. Jumping to the unrealistic part, you can also revive your teammates by using the Unity power, but that’s no reason to complain :) However, it can be fun to measure up alone against your opponents. You can now position your squad members individually, so you can either put them out of harm’s way, use them as a vanguard or place them strategically depending on how you want to engage your enemy (sometimes luring enemies out of their original location is a good tactic).

     

    Fig. 5 – Evil robots from outer space? They’ll never know what hit them…

     

    The DLC (downloadable content) system is pretty interesting itself. Some DLCs are available as add-ons and offer extra missions:

    ● Zaeed Massani adds a new character of the same name, along with new missions and stories

    ● Firewalker gives you an assault vehicle that allows you to hover over the battlefield, and it includes a few missions for you to test it out

    These are available as part of the Cerberus Network, which also includes extra weapons and assault gear. The Cerberus Network is a new kind of DLC portal that allows you to download its content for free as long as you’re subscribed to it. Retail purchasers should already have a network card with the Cerberus redeem code in it; otherwise (e.g. when buying used games) redeem codes can be purchased separately as well. There is another DLC called Kasumi – Stolen Memory, which appeared in April. It features a new character (Kasumi, an enigmatic thief) and a loyalty mission which is pretty cool, but short. It’s not available as Cerberus DLC, but separately, and costs 560 BioWare points (you can buy 800 points for $10). What this game seriously lacks, though, is multiplayer functionality. Even if it’s added through an add-on later, it won’t spark the hype a built-in one would have. Maybe this is something they’ll consider for the 3rd installment.

     

    As you progress you’ll notice which missions look ‘more final’ than others. Before you embark on the journey towards the final battle, be sure to invest in research as much as possible, recruit all the characters, make them loyal and explore all the planets; otherwise, the outcomes of certain situations you’ll be put in will ‘sting’, even if you’ll be able to go on in the game. In any case, keep your savegames, you’re going to need them :) . Once you’re done with the game, keeping your savegames will also be good for Mass Effect 3, which will probably allow importing as well. If you’ve played ME1 you’ll know that you grow attached to some of your team members, and the same thing will happen now. If you haven’t played ME1, play it, you’ve got some catching up to do :)

     

    There is more depth to the characters than their positions on the Normandy reveal. Not only are their personalities thought through, but the art & animation department, as well as the voice cast, are incredible. I like to see who voices the characters as the credits roll when I finish the game, so I won’t spoil it for you, but I’ll say there are a lot of celebrities among the cast. Some you will have recognized even before finishing the game. Speaking of personalities, when you revisit important planets try to choose different squads – there are some meaningful places where you can talk to your squad mates and they share insights and opinions. Also, some of the landscapes in this game, not just the pre-rendered ones, are simply spectacular.

     

    Even though the year isn’t over, Mass Effect 2 has already received its own awards: Best RPG of E3 from GamePro, IGN, GameSpy and others. BioWare, one of the industry leaders, is also working on what is probably one of the most anticipated games: Star Wars: The Old Republic, a ground-breaking MMORPG. Given all this history and the promise of the future itself, Mass Effect 3 should prove to be one of the best games ever, and, furthermore, EA and BioWare will have reasons to support this franchise for a long time to come.

     

    Resources:

    - some of the images are from Mass Effect Wiki and YouTube

    - Mass Effect official site


      Although the subject has been touched in scientific literature, even if only remotely or concerning specific applications, there isn’t really a detailed study of linguistic signatures or a holistic view on its practicality yet. In short, from given texts certain characteristics can be extracted. Together they form a ‘signature’. Computing these characteristics can be done by determining the frequency of all or certain words, their position in the sentence or phrase, their position compared to other predefined or automatically detected keywords and other methods. Also, visualization techniques pose an interesting research challenge – for example, creating differentiations based on language or the purpose of the representation.
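
      As an illustration of the frequency-based part of such a signature, here is a toy Java sketch (all names are hypothetical, not from any of the cited tools; the tokenization is deliberately crude):

      public static java.util.Map<String, Integer> frequencySignature(String text) {
        final java.util.Map<String, Integer> freq = new java.util.HashMap<String, Integer>();
        for (String word : text.toLowerCase().split("\\W+")) { // crude tokenization
          if (word.length() > 0) {
            final Integer n = freq.get(word);
            freq.put(word, n == null ? 1 : n + 1); // count each word's occurrences
          }
        }
        return freq;
      }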

       

      Such signatures could have many uses, commercial or otherwise. Here are a few:
      • profiling potential or current employees – e.g. useful on employment through the analysis of the cover letter (in this case the employer might be interested in the presence or absence of deception, or in certain personality traits)
      • forensics or other types of investigations [1] – e.g. comparison of multiple signatures to see if the texts they originated from have the same author, detection of hidden purpose in messages, etc.
      • automatic classification of articles / documentation based on the signature of technical, scientific domains
      • spam filters – by analyzing the perceived relevance of the email content
      Future research has the potential to be very successful. For example, it appears that humans normally can’t detect deception at a rate higher than approximately 50% (even trained persons) [2], which basically means this area is left to chance. A lot of methods (with a strong psychological basis) have surfaced lately, but the efficiency of most is still being assessed in current studies. One of the many challenges is, for example, establishing how to evaluate the influence of factors such as the culture or social category of authors, or even which of these intrinsic or extrinsic traits are relevant in a given context.
      Visual representation of such signatures, as I said earlier, can be very intuitive. Here is an example:
      Fig. 1 – Data visualization generated by the Textour app [3]
      Apps designed for text analysis could also gain data mining and neural-network abilities in the future – once they have a training set with extracted signatures, they could be used to make interpretations about the nature of the author or of the message. Current applications built for advanced profiling of authors more or less rely on additional manual preprocessing of the input text (e.g. syntactic or morphological analysis) and on dictionaries which store preset meanings and training results. Fuzzy logic is another instrument that is beginning to take its rightful place in this area.
      In a way, since texts are more often typed on a keyboard than handwritten these days, expertise such as the one described in this article may come to play a very important role. Similar expertise can also be used for therapeutic purposes where communicational disabilities are concerned. Even if we all use the same grammar and our native language seems to be the same for everyone, the way we communicate still remains particular to each individual. Carole Chaski [1] compared this to DNA, 98% of which is shared among all of us – the remaining 2% is still enough to provide the diversity we see.

      Resources

      [3] Textour – an app which performs text analysis


        People, as individuals, aren’t blogging as much as they once did. Is Twitter to blame? Companies still communicate important things through their blogs, but lately there has been a Twitter adoption avalanche, even among companies. Sure, Twitter is growing incredibly fast – but what does this mean for other services, such as Blogger (which recently turned 10 years old)? Is micro-blogging stealing buzz away from normal blogging as a general trend? Thoughts that are normally too ‘unimportant’ to find their way to the blogging interface find their way to Twitter a lot more easily, especially with the explosion of various Twitter clients. This also taps into the power of mobile, since it’s probably easier to tweet on a phone than to blog on one.

         

        Another change is the fact that Digg is getting less exposure, or at least that is the general perception. Once, Digg links were flooding chat windows, but a lot of that has now been ‘stolen’ by Facebook and Google Reader, which implemented ‘Like’ and sharing features. These two also attempt to ‘digg’ into Twitter by emphasizing, in their UI experience, a big text box in which you can write your ‘status’. Sometimes the plethora of features is bothersome though, maybe evidence of attempts to imitate what’s working for competitors… Hybrid services are also beginning to appear (e.g. Tumblr) and, with more and more social networks and communication platforms, integration is gaining ground as well (e.g. FriendFeed, Brizzly).

         

        If we were to evaluate the future of these services strictly on intrinsically added value, Google seems to always be the best at search, despite the fact that new engines like Bing or Wolfram Alpha are gaining popularity as well. Microsoft always seems to be the best at offline operating systems, at least based on usage figures, just as Google is already tapping into the online system area, which isn’t seriously implemented by anyone at this point. A lot of entities want Twitter to introduce monetization – in this direction, interactive solutions, such as tweeting through Amazon Associates, already exist. Facebook already appears to have won the social networking monopoly; furthermore, it will start tweaking its privacy system towards a more suitable, modern behavior. Google Reader is steadily following its own trend to become the top feed reader, but it’s actually its sharing platform that is one of its most appreciated features. However, it’s not only the technological configuration that shows dynamism, but also the financial side, the possibility that some of the actors will acquire others being very high, especially in an era of ‘startups’ =]

        Until then, a fact is certain – an international language does exist, and it contains at least these 3 symbols:


          Everyone uses them

          Smileys are used so often nowadays that some suggest they should have their own keys on the keyboard. But precisely because we use them so often, how often do we think about how well they fit into our typing pattern and how we can type them quickly and easily? The cumulative strain you can save by using simpler ways to insert them in chats and documents will amaze you. :) For example, in the short time that has passed since you began reading this article, over 700 people made a happy face on Twitter.

          Studying smileys in detail. Somebody has to do it :)

          There are many kinds of popular smileys out there, but almost all of them involve pressing the SHIFT key in addition to the 2 or 3 keys which actually make the smiley show up. In ‘complex’ but classic smileys like :-) or :-( keys are pressed 5 times. Smileys should be easier to type! :) The various forms are detailed below. It’s interesting to note that a smiley is interpreted as such because of the ‘eyes’ symbol, whichever it is; the actual meaning of the smiley is usually given by the last character, which represents the emotion (btw, the word emoticon comes from emotion + icon). To keep things simple but as useful as possible (and considering the “make the common case fast” rule), let’s take a look at the pure smiley (no winks or sad faces :) since it’s by far the most used.

          Smiley    No. of key presses
          :-)       5 (2+1+2)
          :)        3 (2+1)
          :]        3 (2+1)
          =)        3 (1+2)

          We see that the number of key presses isn’t that relevant, because some keys, like SHIFT, can be pressed for shorter or longer periods. That means counting and adding the number of fingers occupied by each character is a better measure of effort.

          Smiley    Finger effort
          :-)       5 (2+1+2)
          :)        4 (2+2)
          :]        3 (2+1)
          =)        3 (1+2)

          Can we reduce the finger effort even more? Mathematically, we can’t reduce it to the action of a single finger on default keyboards, because that would require a dedicated smiley key, which doesn’t exist (yet :) . That leaves us looking for a 2-finger / 2-key-press solution, and interestingly enough it exists and is used, but by very few people: =] . No SHIFT key required. I first saw it used on Twitter (unfortunately I can’t remember by whom) and made it my own as well – you may be wondering why I use :) in this article, but more about that in the next section. What’s even more interesting is that, given enough practice, you can either:

          • use 2 fingers as one to press the 2 keys (= and ]) at the same time
          • or even use only one finger to press them both because the keys are physically next to each other.

          I won’t be referring to ‘vertical’ smileys like o_O because they’re harder to write to begin with. However, there is one that makes use of an interesting concept: double-pressing a key just like you would double-click an icon. I’m talking about ^^. This introduces time as a measure of effort as well, because the closer the keys are, the faster you can press them. And if they happen to be the same key, that’s even better. :) So let’s take the covered keyboard distance between the main keys into consideration, to have a truly complete view of smileys. :) We’ll ignore SHIFT because we usually use the same finger to press it.

          Smiley    Finger effort    Distance    Total
          :-)       5 (2+1+2)        3 (2+1)     8
          :)        4 (2+2)          2 (2)       6
          :]        3 (2+1)          2 (2)       5
          =)        3 (1+2)          2 (2)       5
          =]        2 (1+1)          1 (1)       3
          ^^        4 (2+2)          0 (0)       4

          Smiley standards and profiles

          The classic smiley :-) is probably most used in presentations and formal situations. People who use it there don’t always make this choice consciously, but it is a fact that a smiley with a ‘nose’ is easier to distinguish. You see it in regular chats too, but not as often. Let’s do an exercise: isn’t it true that those of your chat contacts who use the smiley more often use it without a ‘nose’?

          IM clients nowadays offer ways to insert smileys simply by using the mouse, but how easy is that really? It means taking one hand off the keyboard and moving it to the mouse or touch-pad. I use the aforementioned =] in informal situations, like chat or tweets, because it’s easy and fast to type. You can afford to do this because friends will get used to your typing habits, unlike articles, where, if I had used =] from the beginning, you would have thought I’m a weirdo (which may still be the case, because I’ll start using it now =D).

          Some blogging platforms recognize smileys in blog posts and comments and replace the text with an actual image. That’s a great thing standard-wise because it allows for easier recognition by the reader – at this point the only problem is how easily we can type what needs to be recognized. If we were to take this a step further, let’s think about how we play games: when we go through the settings, we also check out the keyboard shortcuts. What if we could have account-specific characters that we would like to be preprocessed, and even the ability to upload our own images to be inserted? Or, even better, a ‘dynamic’ image, which could be, for example, a screenshot? Wouldn’t it be cool to have the option to double-press a key (one that doesn’t often appear twice in a row, like the comma key) and have a screenshot auto-attached to the email you’re writing or auto-inserted in the blog post you’re editing? Or auto-uploaded to the Wave you’re editing or to the tweet you’re tweeting. I could get used to this, and it could be extended to all kinds of fun stuff.

          On a sidenote, my personal opinion is that smileys should be considered punctuation marks. They express feelings and pauses much better than, for example, a simple exclamation mark, and they have the advantage of being grammatically correct wherever you put them =D.

          Offline solutions (for the future)

          Even though various solutions could be built as apps on top of an OS, in the long term this wouldn’t really be efficient unless the keyboard interpretation remains at OS level. Security should also be considered, since ‘intercepting’ keyboard events may either be flagged as dangerous by firewalls or similar security apps even when it isn’t, or might actually be a risk by association with key loggers. Much of this depends on the OS, of course, but the general idea for a future ‘soft’ introduction of smileys on current keyboard layouts should be that an event isn’t intercepted but rather that its implementation at OS level is enriched (e.g. double-pressing the comma key shows a smiley whether you’re in a browser, a text editor or something else).

          ‘Hard’ introduction of smileys has been popularized, more or less concretely, but I have yet to discover a company that actually makes keyboards with smiley keys on them. The only such pictures I’ve seen on the web are photoshopped =]. But every good project starts with concept art =], so the only question is: do we really want smileys on our keyboards or not?



            1. Context and technological basis

            In my previous articles I’ve treated improvements that can be brought not only to algorithms – and apps generally – but also to the process of their development. However, the degree to which these improvements are implemented today isn’t very high, so research topics arise around how to best help users and developers alike, based on the technologies treated.

            This is why the structure of this article is, in a way, reversed: we’ll start from tangible aspects (or ones we would like to be tangible) and move towards concepts and abstract directions.

            One of the things we demand from programs is adapting to various user contexts, which implies dynamism; and when something adapts, we also expect it to do so automatically. If we combine the corresponding practical techniques, dynamic allocation and reflection in their various forms, the simplest yet most powerful purpose we can think of is connected to the ‘most universal’ unit of action: the algorithm. This translates to automatically loading the best-performing algorithm for a certain purpose; in other words, given a sufficient amount of data, the program should act efficiently on its own, taking as much burden as possible off the user.
            2. Possible applications

            A purely software-level application, besides the classic example of sorting, is decision making – one of those things that can always be improved, for example in expert systems or games. Benefits can reach hardware and even more general physical applications by passing over the IT barrier. Exploring this outward flow of evolution, some of the targeted fields are:

            • Storage – with both of its major exponents: hard disk and memory
            • Surface processing – fire extinguishing, etc.
            • Navigation – GPS, etc.
            • Health care – imaging investigations

            The process of reaching these, including through development-specific improvements, is detailed in the following sections.
            3. Research directions

            A similar, barrier-based approach can be used on research directions by separating the entities which participate in or are affected by the specific scope. For example, a client is served by a server, and, as with any service, resources are consumed. These 3 elements (servers, clients and resources) can form the basis for a unitary perspective on what issues should be tackled in the future.

            3.1. Client-side – Server-side equilibrium

            How much should be client-side and how much server-side? The trend is obviously to move as much as possible online, so one view would state that the web browser is all we need. However, not everything is possible within a browser yet, though the creation of the V8 JavaScript engine and HTML5 is working to that end. Another thing happens as a consequence: the web browser is getting more powerful, which can be a pattern for any client app – server relation. This also suggests that we can apply the ‘barrier approach’ even deeper into this research model because:

            • servers rely on client apps to show their performance to users (without clients, servers are useless)
            • client apps rely on operating systems to function (without operating systems, clients apps can’t function)
            As we progress into the ‘server-side’ era, dependencies will go the opposite way as well (the operating system is nothing without a browser, and the browser is nothing without the web). This level of barriers, though, is bound to get blurry when Google Chrome OS is released.
            As a sidenote, it’s interesting to observe that, while the processing power available to the general public increased, offline applications developed to take advantage of that power; then, as clock speeds plateaued around 3 GHz, online suites started to appear. This isn’t a “cause and effect” association but just a chronological note, as absolute CPU power kept increasing through multiple cores, GPUs, etc. A cause for such an association might simply be the fact that a lot is happening in a short time, a known IT characteristic, everything in this area being exponential. Thus, the speed with which the client-server equilibrium is changing can be an interesting subject of study, as it can have an impact not only on software technology but on the hardware industry as well, in areas ranging from economics to new standards and practices.

            3.2. Preloading – Cloud computing equilibrium

            This is different because it deals with data availability within the confines of the existing structure described in the previous section. Streaming video is the best example: much of it, if not all, is loaded before you actually watch it. But what if the user pauses the video? Or doesn’t even watch it to the end? A certain quantity of resources will then have been wasted by the server side, boiling everything down to user experience vs. available resources (bandwidth, processing, etc.). It’s important, though, that this ratio can be measured, so the service provider can choose what fits best. Another, similar example is online PDF viewing. Every page is (pre)loaded individually, usually as an image, and the same questions persist: does the user scroll down through the whole document? Should the following pages be preloaded? If so, how many?

            If we were to remain on the subject of streaming, let’s take a look at an interesting design decision observable in YouTube’s HTML5 version: the related-video thumbnails will be constructed using the <video> tag, the same way as the main video, so when the user moves the mouse over them, these thumbnails will start playing right there, in their tiny assigned space. This is an excellent usability feature, but what will be the impact on server resources? The first answer that comes to mind is a negative impact, because, besides the main video, all these other small videos have to be streamed. But an interesting perspective tells us that once users have more previews accessible at their discretion, they will be able to navigate where they want faster and more directly, without watching videos that aren’t really interesting to them but which consume resources anyway. The result fits the sought equilibrium better than the current state of things because it allows much of the decision to be made more intuitively, through mouse moves, instead of complicated clicks and extra windows or tabs :) Of course, there can be many more contributing factors to this equilibrium, like size optimizations or other specifics, but we won’t get into such technical details; I just wanted to reveal a pattern in design evolution.
            However, not everyone is the same – we all use websites, and web services generally, in many different ways, which can be a problem for service providers when they want to please as many users as possible. But what if they delivered a customized version of the ‘equilibrium’ to every user? This can be done with data mining, or even simple statistics, on each user’s particular usage. For example:
            • it’s been determined that Michael watches 85% of his videos entirely;
            • John watches to the end of about 50% of the videos.
            So when these users hit the pause button, what if we preloaded 85% of the remaining part for Michael and 50% for John? It’s a direct way to correlate individual usage with resource consumption. Again, these blunt figures might not be feasible, but the underlying rationale can lead to substantial server-side optimization.
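
            A minimal sketch of that idea (the method name and the linear rule are assumptions for illustration, not a real streaming API):

            // preload an amount proportional to the user's historical completion rate
            public static long bytesToPreload(double completionRate, long remainingBytes) {
              // e.g. completionRate = 0.85 for Michael, 0.50 for John
              return (long) (completionRate * remainingBytes);
            }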

            3.3. Processing / Workload distribution

            This deals with the consequences of choosing certain configurations for the so-called equilibriums. Because processing also means workload, the needed resources and their quantities should be identified. Referring strictly to IT-related issues, they translate into:
            • Energy cost distribution – equipment needs electrical energy to run. No wonder companies in this field are green energy pioneers;
            • Wear distribution – every piece of equipment has a certain lifetime. When it stops working it has to be replaced (another cost), so data loss and degradation of service have to be prevented.
            At first glance it may seem these points refer to the server side, but they apply in exactly the same way to clients too. The workload distribution can be described through mathematical models which may differ from one app category to another. For example:
            • games: storage and workload are mostly kept offline – the server is just a platform (not without workload though :)
            • streaming: storage is kept online but the content is constructed offline
            • converters: there are many sites that convert files to various media or office formats – the storage is offline but the intensive workload is done online
            We can quickly observe two things: the client can’t avoid its share of the contribution, and storing data online increases the amount of work to be done, because the data always has to be delivered to the offline side. In applications which combine storage and processing online, some resources are saved because the pre-processing and delivery stay on one side (the server).

            4. Objectives

            There’s also the possibility of mapping this vision onto other entities, not only clients and servers per se. Both the ‘client’ and the ‘server’ can be offline (a developer served by an IDE) or both can be online (social API providers, in-browser instant messaging, etc.). Assisting developers is a domain of interest because it enables faster development and more efficient implementations on user machines. For example, my bachelor thesis dealt with automatic comparisons of sorting algorithms; the results can be used to determine which algorithm is best for a certain array size. Such an optimization can be made at runtime, by adding a size check in the deployed framework methods, but even that extra size check decreases performance. So why not help the developer in the implementation phase by creating plugins or extensions for the IDE that automatically detect the array size and choose the best sorting algorithm to begin with?
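
            A sketch of such a runtime size check (the threshold is illustrative and would ideally be measured, as argued above, rather than hard-coded):

            public static void adaptiveSort(int[] a) {
              if (a.length < 32) { // small arrays: insertion sort tends to win
                for (int i = 1; i < a.length; i++) {
                  final int v = a[i];
                  int j = i - 1;
                  while (j >= 0 && a[j] > v) {
                    a[j + 1] = a[j]; // shift larger elements right
                    j--;
                  }
                  a[j + 1] = v;
                }
              } else {
                java.util.Arrays.sort(a); // library sort for larger arrays
              }
            }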

            The final objective of such research could be “decision modules”, which can be software and hardware alike. Think about what it would be like to have such self-sufficient, portable and interchangeable modules which you can add, replace and improve separately, either in a program (extensions, plugins, filters, assistants, etc.) or physically in your computer (dedicated chips, customized video boards, etc.). The tangible target is, in the end, minimum cost and maximum response speed on computation and communication devices.

            5. Science Fiction

            Sci-fi’s role should be reiterated, not only when implementing technological progress but also when contemplating future challenges.

            Figure 1. Data and Lore (Star Trek)
            Fig. 1 is a reminder of the Star Trek emotion chips which allowed androids to experience human feelings or, at least, act accordingly. This inspires a specific application of the decision modules mentioned in the previous section: activity profiles which mirror the user’s ‘attitude’ and emphasize their individual needs. There’s a subtle nuance to this that Rama-Kandra, a program from The Matrix Revolutions, defines perfectly:
                Neo: I just have never…

                Rama-Kandra: …heard a program speak of love?

                Neo: It’s a… human emotion.

                Rama-Kandra: No, it is a word. What matters is the connection the word implies. I see that you are in love. Can you tell me what you would give to hold on to that connection?

                Neo: Anything.

                Rama-Kandra: Then perhaps the reason you’re here is not so different from the reason I’m here.

            Most of the time, huge leaps in usability don’t need complicated AI-like investments, as long as optimization opportunities are given enough importance and popularity.

            6. Conclusions

            Continuous technological improvement is undeniable at this point in history. As growth becomes exponential and the world population keeps increasing, the priorities and patterns of expansion should be defined at a global level, in the interest of all parties involved: companies as commercial entities, developers as the driving force of the industry, and users, who essentially represent everyone.


              In the 3rd millennium we notice emerging trends in the process of developing software applications, with the purpose of improving computational efficiency (maximizing response speed, minimizing electricity consumption). The diversification of interaction paradigms, not only between applications and users, but between development environments and programmers as well, is gaining ground in this technological period. .NET, sustained by new iterations of Windows for the PC and Windows Mobile, and Java, sustained by Linux (and Android, which is based on it), are constantly expanding as platforms. One can also see a similar parallelism between Mac OS and iPhone OS. These combinations between systems and programming languages define new practices of efficiency which, along with existing ones, can be generalized into a global vision.

              Given these conditions, a unitary programming strategy is needed, as a software engineering standard which consistently treats the aspects that can be optimized, in the context of increasing pressure from a user community eager for maximum performance with minimum costs and reduced impact on the environment. Furthermore, we can argue the necessity of real-time solutions for decisional problems which can appear during implementation.

              The purpose of this article is to study the impact of client-side apps on local computation resources and to reveal optimization techniques which are usable on multiple programming platforms.

               

              Algorithmic optimization

              One of the aspects that can be optimized within a program is the algorithm implementation. Its subject usually consists of data structures, therefore detailed knowledge of them is needed, including the difference between primitive and wrapper types (e.g. int vs. Integer in Java), selection mechanisms (how the compiler sees these types) and the possibility of an object being immutable. Once this basis is established, certain structural profiles define themselves – for example, in the case of collections, access by index (array) or by sequence (list). Continuing this rationale, we notice the distinct purposes of sorting a finalized data set and real-time sorting. Thus, the solutions that are put into practice are highly dependent on the application context, the choice being based on a set of criteria. One such criterion is the order of magnitude of execution time (known as the O notation).
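              As a quick illustration of the primitive vs. wrapper difference mentioned above, here is a rough timing sketch (the iteration count is arbitrary, and such micro-measurements are only indicative):

// Sketch: compare an int accumulator with an Integer one. The boxed loop
// unboxes, adds and boxes a new Integer on every iteration.
public class BoxingCost {

    public static void main(String[] args) {
        final int N = 10_000_000;

        long start = System.currentTimeMillis();
        int primitiveSum = 0;
        for (int i = 0; i < N; i++) {
            primitiveSum += i; // no allocation
        }
        long primitiveMs = System.currentTimeMillis() - start;

        start = System.currentTimeMillis();
        Integer boxedSum = 0;
        for (int i = 0; i < N; i++) {
            boxedSum += i; // unbox, add, box
        }
        long boxedMs = System.currentTimeMillis() - start;

        // Print the sums too, so the compiler can't discard the loops.
        System.out.println("sums: " + primitiveSum + ", " + boxedSum);
        System.out.println("int: " + primitiveMs + " ms, Integer: " + boxedMs + " ms");
    }
}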

              An additional dimension of a client-side app’s life is the periodic refreshing of information and the means of storing / accessing it. The actual behavior has to be defined carefully, considering the possible lack of an internet connection. We can deduce the necessity of storing a copy of the data at the moment of refreshing, so as not to affect the user experience. As a case study, I’d like to point out the processing of an XML feed through the JSON serialization format.

              We have, therefore, referred to the data input and the observable behavior relative to algorithms. So, if we map this activity onto an IPO model (Input Process Output), the necessity of adjusting the actual process stands out, a purpose which can be reached by parameterizing certain portions of code. This way the openness to future changes increases, so the programmer’s ability to anticipate flexibility in structures becomes very important. To define the parameterization both in hierarchy and in code, it can be encapsulated as attributes in an object, with multiple such objects supporting multiple user profiles (personalized settings for every user). The adjusting can be done the opposite way as well, by collecting usage statistics, a process which is divided into 3 steps:

              • initiating communication on the user machine
              • calling a server-side script
              • modifying a database

              In the present context, certain standards, like anonymity, have to be upheld. For example, you could assign every user the number of milliseconds which passed from 1970 to the moment of the install. This is easy to obtain in many programming languages (e.g. in JavaScript it’s new Date().getTime()), and this way you don’t need to store data that might be considered private, like the IP address.
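              A minimal Java sketch of this idea, assuming java.util.prefs as the local store (any persistent per-user storage would do):

import java.util.prefs.Preferences;

// Sketch: an anonymous, install-time identifier based on the number of
// milliseconds passed since 1970, so nothing private (like the IP) is kept.
public class InstallId {

    private static final String KEY = "installId";

    public static long get() {
        Preferences prefs = Preferences.userNodeForPackage(InstallId.class);
        long id = prefs.getLong(KEY, 0L);
        if (id == 0L) {
            id = System.currentTimeMillis(); // first run: record the timestamp
            prefs.putLong(KEY, id);
        }
        return id;
    }

    public static void main(String[] args) {
        System.out.println("Anonymous install ID: " + get());
    }
}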

              Part of the interface optimization can also be included in this section, with specific techniques:

              • dynamic definition: brings an improvement in memory usage and programming time
              • changing / destroying graphical elements: can be very important on mobile phones, since an app might be interrupted by a call, but after the call is over the app has to be able to revert to its initial state
              • intuitive interaction
              • visual space: influenced by the container object, the actual functionality and the equilibrium between volatile notifications and the ones which contain full sets of data
              • recycling graphical elements: through geometric transformations (e.g. rotation, for ‘back’ and ‘forward’ buttons – see the sketch after the note below)

              Note: by ‘volatile notification’ we mean a message that appears for a few seconds, rather than taking up space for an indefinite amount of time.
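              As a sketch of the recycling technique mentioned in the last bullet (desktop Java is assumed here for brevity; Android’s Bitmap and Matrix classes offer the same rotation), a single ‘forward’ arrow image can be turned 180° into the ‘back’ button:

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

// Sketch: reuse one arrow image for both 'forward' and 'back' buttons
// by rotating it 180 degrees instead of shipping two separate images.
public class IconRecycler {

    public static BufferedImage rotate180(BufferedImage forwardArrow) {
        AffineTransform tx = AffineTransform.getQuadrantRotateInstance(
                2,                             // two quadrants = 180 degrees
                forwardArrow.getWidth() / 2.0, // rotate around the center
                forwardArrow.getHeight() / 2.0);
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        return op.filter(forwardArrow, null); // the 'back' arrow
    }
}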

               

              Treating graphical resources

              How images are used, along with native loading mechanisms, can play an important role in optimizing the performance of a program, especially if they intervene directly in its dynamic behavior. Below you can see a comparison between the most widespread formats. As the image dimension increases, the speed differences widen, but the order remains constant. So for small images the intrinsic features of these formats take precedence (the choice should be based on the main purpose: transparency, color depth, etc.), while for bigger images additional metrics could be constructed to help us decide (e.g. [format loading speed] × [format storage space]).

              Figure 2: Speed ratings based on image loading stress tests on user machines
              (component of my bachelor thesis: “The systemic impact of client-side apps. Optimization techniques”)
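              A minimal version of such a stress test can be reproduced with a timing loop like the one below (desktop Java is assumed, and the file names are placeholders for identical images saved in each format):

import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Sketch: time how long each format takes to decode. Real measurements
// would average over many runs and warm up the VM first.
public class ImageLoadBenchmark {

    static long timeDecode(File file, int iterations) throws IOException {
        long start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            ImageIO.read(file); // decode the image, discard the result
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws IOException {
        String[] samples = { "sample.png", "sample.jpg", "sample.gif" }; // placeholders
        for (String name : samples) {
            System.out.println(name + ": " + timeDecode(new File(name), 100) + " ms");
        }
    }
}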

               

              In any case, apps diversify as we take more working parameters into consideration. For example, the actual process of reading data from a file can be highly dependent on the operating system, just like a graphical container object which displays an image. The impact is also different on mobile systems vs. desktop or laptop systems, in the sense that the former almost always run on battery, resulting in a need not just to optimize the actual rendering of the image but also to reduce the number of renderings. Furthermore, in the context of connectivity, which is more and more present in the 3rd millennium, multiple types of client-side apps have developed, with dedicated computation devices (e.g. video boards) and differentiated applicative purposes (e.g. video games vs. web browsers) as criteria. The current landscape is made whole by emerging formats like JPEG XR (also called HD Photo) promoted by Microsoft, APNG (Animated PNG) promoted by Mozilla, and SVG (Scalable Vector Graphics).

               

              Conclusion

              We have gone over a number of techniques which are applicable in a large set of contexts, suggesting a unitary optimization strategy which starts in the design phase (or even the specification phase) and reaches implementation. We’ve also touched on the area of automatic comparisons; we’ll talk about the research possibilities they open in future articles.