Research topics in client-side technologies

    1. Context and technological basis

    In my previous articles I’ve covered improvements that can be brought not only to algorithms – and to apps in general – but also to the process of their development. However, the degree to which these improvements are implemented today isn’t very high, so research topics arise around how to best help users and developers alike, based on the technologies discussed.

    This is why the structure of this article is, in a way, reversed: we’ll start from tangible aspects (or ones that we would like to be tangible) and move toward concepts and abstract directions.

    One of the things we demand from programs is adapting to various user contexts, which implies dynamism; and when something adapts, we also expect it to do so automatically. If we associate the corresponding practical techniques, dynamic allocation and reflection in their various forms, the simplest yet most powerful purpose we can think of is connected to the ‘most universal’ unit of action: the algorithm. This translates to the automatic loading of the best performing algorithm for a certain purpose; in other words, given a sufficient amount of data, the program should act efficiently on its own, taking as much burden as possible off the user.
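
    To make this more concrete, here is a minimal TypeScript sketch: several interchangeable implementations are registered, a quick benchmark runs on a small sample of the real input, and the fastest one is picked automatically. The names (Dedup, registry, pickFastest) and the deduplication task are illustrative assumptions, not part of any existing framework.

```typescript
// Minimal sketch of automatic algorithm selection: register candidates,
// benchmark them on a sample of the actual input, use the fastest one.
type Dedup = (data: number[]) => number[];

const registry: Record<string, Dedup> = {
  // O(n^2), but cheap for tiny inputs
  nestedLoops: (data) => data.filter((v, i) => data.indexOf(v) === i),
  // O(n), at the cost of extra memory
  hashSet: (data) => [...new Set(data)],
};

function pickFastest(sample: number[]): Dedup {
  let best: Dedup = registry.hashSet;
  let bestTime = Infinity;
  for (const impl of Object.values(registry)) {
    const start = performance.now();
    impl(sample);
    const elapsed = performance.now() - start;
    if (elapsed < bestTime) { bestTime = elapsed; best = impl; }
  }
  return best;
}

// The program "acts efficiently on its own": the caller never names an algorithm.
const input = Array.from({ length: 5_000 }, () => Math.floor(Math.random() * 100));
const unique = pickFastest(input.slice(0, 200))(input);
```
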
    2. Possible applications

    A purely software application, beyond the classic example of sorting, is decision making. Decision making is one of those things that can always be improved, for example in expert systems or games. Benefits can reach hardware and even more general physical applications by crossing the IT barrier. Exploring this outward flow of evolution, some of the targeted fields are:

    • Storage – with both of its major exponents: hard disk and memory
    • Surface processing – fire extinguishing, etc.
    • Navigation – GPS, etc.
    • Health care – medical imaging investigations
    The process of reaching these, including through development-specific improvements, is detailed in the following sections.
    3. Research directions

    A similar, barrier-based approach can be used on research directions by separating the entities which participate in or are affected by the specific scope. For example, a client is served by a server, and, as with any service, resources are consumed. These three elements (servers, clients and resources) can form the basis for a unitary perspective on what issues should be tackled in the future.

    3.1. Client-side – Server-side equilibrium

    How much should be client-side and how much server-side? It’s obvious that the trend is to move as much as possible online, so one view would state that the web browser is all we need. However, not everything is possible yet within a browser, though the creation of the V8 JavaScript engine and HTML5 is working to that end. Another thing happens as a consequence: the web browser is getting more powerful, which can be a pattern for any client app – server relation. This also suggests that we can apply the ‘barrier approach’ even deeper into this research model because:

    • servers rely on client apps to show their performance to users (without clients, servers are useless)
    • client apps rely on operating systems to function (without operating systems, client apps can’t function)
    As we progress into the ‘server-side’ era, dependencies will go the opposite way as well (the operating system is nothing without a browser, and the browser is nothing without the web). This level of barriers, though, is bound to get blurry when Google Chrome OS is released.
    As a side note, it’s interesting to observe that, while the processing power available to the wide public increased, offline applications developed to take advantage of that power; then, as clock speeds plateaued around 3 GHz, online suites started to appear. This isn’t a “cause and effect” association but just a chronological note, as absolute CPU power kept increasing through multiple cores, GPUs, etc. A cause for such an association might simply be the fact that a lot is happening in a short time, a known IT characteristic, everything in this area being exponential. Thus, the speed with which the client-server equilibrium is changing can be an interesting subject of study, as it can have an impact not only on software technology but on the hardware industry as well, in areas ranging from economics to new standards and practices.

    3.2. Preloading – Cloud computing equilibrium

    This is different because it deals with data availability within the confines of the existing structure described in the previous section. Streaming video is the best example: much of it, if not all, is loaded before you actually watch it. But what if the user pauses the video? Or doesn’t even watch it to the end? A certain quantity of resources will then be wasted on the server side, boiling everything down to user experience vs. available resources (bandwidth, processing, etc.). It’s important, though, that this ratio can be measured so the service provider can choose what fits best. Another, similar example is online PDF viewing. Every page is (pre)loaded individually, usually as an image, and the same questions persist: Does the user scroll down through the whole document? Should the following pages be preloaded? If so, how many?
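
    To make the trade-off concrete, here is a small TypeScript sketch of a preload policy for a paged document; the PreloadPolicy fields, the numbers and the capping formula are illustrative assumptions, not a real service’s configuration.

```typescript
// Sketch of the "how much to preload" trade-off for a paged document:
// cap the speculatively loaded pages by both a bandwidth budget and the
// share of users who actually read past the current page.
interface PreloadPolicy {
  bytesPerPage: number;     // average size of one rendered page
  bandwidthBudget: number;  // bytes we are willing to spend speculatively
  readThroughRate: number;  // fraction of users who scroll past the current page
}

function pagesToPreload(policy: PreloadPolicy): number {
  const byBudget = Math.floor(policy.bandwidthBudget / policy.bytesPerPage);
  // Preload more aggressively when most users keep scrolling anyway.
  const byBehaviour = Math.round(byBudget * policy.readThroughRate);
  return Math.max(1, Math.min(byBudget, byBehaviour));
}

// Example: 200 KB pages, a 2 MB speculative budget, 60% of users scroll on.
console.log(pagesToPreload({
  bytesPerPage: 200_000,
  bandwidthBudget: 2_000_000,
  readThroughRate: 0.6,
})); // -> 6 pages
```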

    If we were to remain on the subject of streaming, let’s take a look at an interesting design decision observable in YouTube’s HTML5 version: the related-video thumbnails are constructed using the <video> tag, the same way as the main video, so when the user moves the mouse over them, these thumbnails start playing right there, in their tiny assigned space. This is an excellent usability feature, but what will be the impact on server resources? The first answer that comes to mind is a negative impact because, apart from the main video, all these other small videos will have to be streamed. But an interesting perspective tells us that once users have more previews accessible at their discretion, they will be able to navigate where they want faster and more directly, without watching videos that aren’t really interesting to them but which consume resources anyway. The result fits the sought equilibrium better than the current state of things because it allows much of the decision to be made more intuitively, through mouse moves, instead of complicated clicks and extra windows or tabs :) Of course, there can be many more contributing factors to this equilibrium, like size optimizations or other specifics, but we won’t get into such technical details; I just wanted to reveal a pattern in design evolution.
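
    For illustration, the hover-to-play pattern described above can be sketched with the standard HTML5 <video> element roughly like this; the .related-thumb selector and the preload/muted settings are assumptions, not YouTube’s actual implementation:

```typescript
// Hover-to-play thumbnails: nothing streams until the user shows intent.
document.querySelectorAll<HTMLVideoElement>('.related-thumb').forEach((thumb) => {
  thumb.preload = 'none';   // do not buffer anything up front
  thumb.muted = true;       // previews play silently in their small slot
  thumb.addEventListener('mouseenter', () => { void thumb.play(); });
  thumb.addEventListener('mouseleave', () => {
    thumb.pause();
    thumb.currentTime = 0;  // rewind so the next hover starts from the beginning
  });
});
```
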
    However, not everyone is the same – we all use websites, and web services generally, in many different ways, which can be a problem for service providers when they want to please as many users as possible. But what if they delivered a customized version of the ‘equilibrium’ to every user? This can be done with data mining, or even simple statistics, on each particular usage. For example:
    • it’s been determined that Michael watches 85% of his videos entirely;
    • John watches to the end of about 50% of the videos.
    So when these users hit the pause button, what if we could preload 85% of the remaining part for Michael and 50% for John? It’s a direct way to correlate individual usage with resource consumption. Again, these blunt figures might not be feasible, but the revealed rationale can lead to a substantial server-side optimization.
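
    A minimal sketch of this per-user idea, assuming a hypothetical ViewingProfile record and made-up completion rates:

```typescript
// Per-user preloading: the fraction of videos a user historically finishes
// becomes the fraction of the remaining stream we keep buffering after a pause.
interface ViewingProfile { userId: string; completionRate: number; } // 0..1

function bytesToPreloadOnPause(profile: ViewingProfile, remainingBytes: number): number {
  return Math.round(remainingBytes * profile.completionRate);
}

const michael: ViewingProfile = { userId: 'michael', completionRate: 0.85 };
const john: ViewingProfile = { userId: 'john', completionRate: 0.5 };

console.log(bytesToPreloadOnPause(michael, 40_000_000)); // 34,000,000 bytes
console.log(bytesToPreloadOnPause(john, 40_000_000));    // 20,000,000 bytes
```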

    3.3. Processing / Workload distribution

    This deals with the consequences of setting certain configurations for the so-called equilibriums. Because processing also means workload, the needed resources and their quantities should be identified. Referring strictly to IT-related issues, they translate into:
    • Energy cost distribution – equipment needs electrical energy to run. No wonder companies in this field are green-energy pioneers;
    • Wear distribution – there is a certain lifetime for every piece of equipment. When it stops working it has to be replaced (which is another cost), so data loss and degradation of service have to be prevented.
    At first glance it may seem these points refer to the server side, but they apply in exactly the same way to clients too. The workload distribution can be described through mathematical models which may differ from one app category to another (a toy sketch follows the list below). For example:
    • games: storage and workload are mostly kept offline – the server is just a platform (not without workload though :)
    • streaming: storage is kept online but the content is constructed offline
    • converters: there are many sites that convert files to various media or office formats – the storage is offline but the intensive workload is done online
    We can quickly observe two things: the client can’t avoid its share of the contribution, and storing data online increases the amount of work to be done because it always has to be delivered to the offline side. In applications which combine storage and processing online, some resources will be saved because the pre-processing delivery stays on one side (the server).
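
    As a toy illustration of such a model (the category figures and the serverWorkload formula are assumptions, not measurements):

```typescript
// Toy model of the per-category split sketched in the list above.
interface WorkloadModel {
  storageOnline: number;    // fraction of data kept on the server (0..1)
  processingOnline: number; // fraction of heavy processing done on the server (0..1)
}

const categories: Record<string, WorkloadModel> = {
  games:      { storageOnline: 0.1, processingOnline: 0.1 }, // server is mostly a platform
  streaming:  { storageOnline: 1.0, processingOnline: 0.3 }, // content constructed offline
  converters: { storageOnline: 0.0, processingOnline: 0.9 }, // heavy work done online
};

// Anything stored online still has to be delivered to the offline side,
// so online storage always adds some server-side work on top of processing.
function serverWorkload(m: WorkloadModel, deliveryFactor = 0.5): number {
  return m.processingOnline + m.storageOnline * deliveryFactor;
}

for (const [name, model] of Object.entries(categories)) {
  console.log(name, serverWorkload(model).toFixed(2));
}
// games 0.15, streaming 0.80, converters 0.90
```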

    4. Objectives

    There’s also the possibility of mapping this vision onto other entities, not only clients and servers per se. Both the ‘client’ and the ‘server’ can be offline (a developer served by an IDE) or both can be online (social API providers, in-browser instant messaging, etc.). Assisting developers is a domain of interest because it enables faster development and more efficient implementation on user machines. For example, my bachelor thesis dealt with automatic comparisons of sorting algorithms; the results can be used to determine which algorithm is the best for a certain array size. Such an optimization can be made at runtime, by adding a size check in the deployed framework methods, but even that extra size check decreases performance. So why not help the developer in the implementation phase by creating plugins or extensions for the IDE to automatically detect the array size and choose the best sorting algorithm to begin with?
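
    For comparison, here is a sketch of that runtime alternative: a deployed sort helper that checks the array size and dispatches accordingly. The threshold of 32 is an illustrative assumption; an IDE plugin would instead make this choice at the call site and skip the check entirely.

```typescript
// Simple insertion sort, typically competitive only on small arrays.
function insertionSort(a: number[]): number[] {
  const out = [...a];
  for (let i = 1; i < out.length; i++) {
    const key = out[i];
    let j = i - 1;
    while (j >= 0 && out[j] > key) { out[j + 1] = out[j]; j--; }
    out[j + 1] = key;
  }
  return out;
}

function adaptiveSort(a: number[]): number[] {
  // The branch below is exactly the small runtime cost that an
  // implementation-phase (IDE) decision would eliminate.
  return a.length <= 32 ? insertionSort(a) : [...a].sort((x, y) => x - y);
}
```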

    The final objective of such research could be “decision modules”, which can be software and hardware alike. Think about what it would be like to have such self-sufficient, portable and interchangeable modules which you can add, replace and improve separately, either in a program (extensions, plugins, filters, assistants, etc.) or physically in your computer (dedicated chips, customized video boards, etc.). The tangible target is, in the end, minimum cost and maximum response speed on computation and communication devices.
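
    A hedged sketch of what a software decision module’s contract could look like, with purely hypothetical names and thresholds:

```typescript
// Interchangeable decision modules: self-contained units that can be added,
// replaced or improved independently of the host program.
interface DecisionModule<I, O> {
  name: string;
  decide(input: I): O;
}

// Two interchangeable modules answering the same question.
const conservative: DecisionModule<number, boolean> = {
  name: 'conservative',
  decide: (load) => load < 0.5,  // only accept work when the machine is mostly idle
};
const aggressive: DecisionModule<number, boolean> = {
  name: 'aggressive',
  decide: (load) => load < 0.9,
};

// Swapping modules changes behaviour without touching the host program.
let active: DecisionModule<number, boolean> = conservative;
console.log(active.name, active.decide(0.7)); // conservative false
active = aggressive;
console.log(active.name, active.decide(0.7)); // aggressive true
```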

    5. Science Fiction

    Sci-fi’s role should be reiterated not only when implementing technological progress but also when contemplating future challenges.

    Figure 1. Data and Lore (Star Trek)
    Fig. 1 reminds us of the Star Trek emotion chips which allowed androids to experience human feelings or, at least, act accordingly. This inspires a specific application of the decision modules mentioned in the previous section: activity profiles which mirror the user’s ‘attitude’ and emphasize their individual needs. There’s a subtle nuance to this that Rama-Kandra, a program from The Matrix Revolutions, defines perfectly:
        Neo: I just have never…

        Rama-Kandra: …heard a program speak of love?

        Neo: It’s a… human emotion.

        Rama-Kandra: No, it is a word. What matters is the connection the word implies. I see that you are in love. Can you tell me what you would give to hold on to that connection?

        Neo: Anything.

        Rama-Kandra: Then perhaps the reason you’re here is not so different from the reason I’m here.

    Most of the time, huge leaps in usability don’t need complicated AI-like investments as long as optimization opportunities are given enough importance and popularity.

    6. Conclusions

    Continuous technological improvements are undeniable at this point in history. As growth becomes exponential and the world population keeps increasing, the priorities and patterns of expansion should be defined at a global level, in the interest of all parties involved: companies as commercial entities, developers as the driving force of the industry, and users, who essentially represent everyone.
