
This month’s article deviates from our recent topics. I am a big supporter of green technology, so I chose to write about a new trend in the automobile industry. Ever since we started building automobiles, they have run on petrol or diesel. As cars became more comfortable and convenient, people moved to owning a vehicle rather than using other modes of transport. The industry spread into many markets, both large and niche, catering to an endless range of needs, and it kept growing while few paid attention to the power source driving these machines. Many may simply have turned a blind eye to the issue in order to rake in as much profit as possible. Crude oil has been, and will remain, a vital part of our lives whether or not we choose to acknowledge it. Its scarcity has created real difficulties for ordinary people, especially in transportation costs, and the volatility of crude oil prices, together with the fuel’s harm to the environment, raises the question of whether we are using this power efficiently and whether alternatives exist. A few big names such as Toyota and Nissan have developed breakthrough technology and are using it with success, while others made little effort until their sales started to suffer. In this article I will not question the motives of the industry giants; instead I will elaborate on a few pieces of news that give us hope for a better day.

    “From 20 to 230 miles per gallon within a year?”

It did not take GM long to cut its losses, restructure, emerge from bankruptcy and unveil a brand new car. Not just any car, but one rated at 230 MPG (miles per gallon), which is about 98 km per litre. Yes, this is true. Your good old Toyota probably manages little more than a tenth of that, so this is very good news. The vehicle is built for city use, meaning it performs best over short distances. The ‘Volt’ runs on electricity, with a petrol engine on board to generate more when the battery runs down, and it consumes about 25 kWh of energy per 100 miles, which I would call impressive. The 230 MPG figure assumes mostly electric driving, so real-world numbers may be lower. Another impressive feature is that you can charge the battery from an ordinary power outlet at home. So, just as you charge your mobile phone, when you drive home you will have to remind yourself to plug in your car too. How much will it cost, and when will it ship? Chevy indicates a price of around $40,000, so it is a bit expensive. In fact, Chevy says it will lose money on the first few vehicles, though it will surely welcome the tax credit it receives for hybrid vehicles. You might well see these cars on US roads in 2010, which is not far away.
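For anyone who wants to check the headline conversion, a quick sanity check (using the US gallon of about 3.785 litres and 1.609 km to the mile):

    230 mi/gal × 1.609 km/mi ÷ 3.785 L/gal ≈ 97.8 km/L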

     

    Bizarre design

Only weeks later, Volkswagen announced its own new hybrid model. It looks very different from what you and I are used to, but it delivers far better mileage than an ordinary car. At 170 MPG, the Volkswagen L1 is still at the concept stage, but the firm is serious about going into production in 2013. It bills itself as the “world’s most fuel efficient automobile”, though I am sure Chevy would have a thing or two to say about that. In theory this car can cover 100 km on just 1.38 litres, which is brilliant.

    Green is the new red

It will be some time before the average consumer chooses an environmentally friendly car over a red Ferrari, but the trend seems to be catching on. The big luxury brands are looking to turn their gas-guzzling monsters into more environmentally friendly vehicles, and BMW was the third to introduce one of these high-mileage automobiles. The ‘BMW Vision EfficientDynamics’ concept is not on the same level as the two above, but it is far more efficient than typical sports cars. It incorporates ‘BMW ActiveHybrid’ technology, which manages CO2 emissions while maintaining the pleasure of driving. The car is powered by a three-cylinder turbodiesel engine plus two electric motors, one for each of the front and rear axles. With a top speed of 250 km/h, it can cover 100 km on just 3.78 litres of diesel. CO2 emissions are also reduced substantially, to about 99 g/km, and when the car runs on the electric motors alone that figure drops to around 50 g/km. Yet, to stay on par with other sports cars, the ‘BMW Vision EfficientDynamics’ keeps an aerodynamic, lightweight body, and its 2+2 seating lets up to four occupants share the driving pleasure.

    Cool to be green

     

There is no doubt that popularity in the car industry has become more closely tied to how environmentally friendly a manufacturer’s products are. The hybrid concept itself is nothing new: Toyota has been manufacturing hybrid vehicles for close to a decade, and Nissan came out with a new hybrid a few months back. It is about time all the car manufacturers acknowledged that they need to make efficient vehicles rather than keep introducing advanced features that drink up more fuel. But how soon will the common man be driving these cars? I would give it at least another decade. In the meantime, if you want to be really green, I suggest you go for the newest Prius.

     



At the beginning of the 3rd millennium, we notice emerging trends in the process of developing software applications aimed at improving computational efficiency (maximizing response speed, minimizing electricity consumption). The diversification of interaction paradigms, not only between applications and users but also between development environments and programmers, is gaining ground in this technological period. .NET, sustained by new iterations of Windows for the PC and Windows Mobile, and Java, sustained by Linux (and Android, which is based on it), are constantly expanding as platforms; one can see a similar parallelism between Mac OS and iPhone OS. These combinations of systems and programming languages define new practices of efficiency which, along with existing ones, can be generalized into a global vision.

Under these conditions, a unitary programming strategy is needed: a software engineering standard that consistently treats the aspects that can be optimized, in the context of increasing pressure from a user community eager for maximum performance with minimum cost and a reduced impact on the environment. Furthermore, we can argue for the necessity of real-time solutions to the decision problems that can appear during implementation.

      The purpose of this article is to study the impact of client-side apps on local computation resources and to reveal optimization techniques which are usable on multiple programming platforms.

       

      Algorithmic optimization

One of the aspects that can be optimized within a program is the implementation of its algorithms. Their subject matter usually consists of data structures, so a detailed knowledge of them is needed, including the difference between primitive and wrapper types (e.g. int vs. Integer in Java), the selection mechanisms involved (how the compiler treats these types) and the possibility of an object being immutable. Once this basis is established, certain structural profiles define themselves; in the case of collections, for example, access by index (array) versus access by sequence (list). Continuing this rationale, we notice the distinct requirements of sorting a finalized data set versus sorting in real time. The solutions put into practice are therefore highly dependent on the application context, the choice being based on a set of criteria. One such criterion is the order of magnitude of execution time, known as big-O notation.
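As a minimal illustration of the primitive-versus-wrapper difference (the class name and loop bound are mine, chosen only for demonstration), summing with int avoids the repeated autoboxing that Integer incurs:

    // BoxingDemo.java - compares summing with a primitive vs. a wrapper type.
    // The wrapper loop forces repeated autoboxing/unboxing, allocating
    // Integer objects on most iterations, which costs both time and memory.
    public class BoxingDemo {
        public static void main(String[] args) {
            final int n = 10_000_000;

            long start = System.nanoTime();
            int primitiveSum = 0;
            for (int i = 0; i < n; i++) {
                primitiveSum += i;          // pure primitive arithmetic
            }
            System.out.println("int sum:     "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");

            start = System.nanoTime();
            Integer wrapperSum = 0;
            for (int i = 0; i < n; i++) {
                wrapperSum += i;            // unbox, add, re-box on every pass
            }
            System.out.println("Integer sum: "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }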

An additional dimension of a client-side app’s life is the periodic refreshing of information and the means of storing and accessing it. The actual behavior has to be defined carefully, considering the possible lack of an internet connection; we can deduce the necessity of storing a copy of the data at the moment of refreshing, so that the user experience is not affected. As a case study, I’d like to point out the processing of an XML feed through the JSON serialization format. So far we have referred to the data input and to the observable behavior relative to algorithms. If we map this activity onto an IPO (Input, Process, Output) model, the necessity of adjusting the actual process stands out, a goal that can be reached by parameterizing certain portions of code. This increases openness to future changes, so the programmer’s ability to anticipate where structures need flexibility becomes very important. To define the parameterization both in the hierarchy and in the code, it can be encapsulated as attributes of an object, with multiple such objects supporting multiple user profiles (personalized settings for every user). The adjustment can also be driven the other way, by collecting usage statistics, a process divided into three steps (a sketch follows the anonymity note below):

       

      • initiating communication on the user machine
      • calling a server-side script
      • modifying a database

In this context, certain standards, like anonymity, have to be upheld. For example, you could assign every user the number of milliseconds that had passed since 1970 at the moment of installation. This is easy to obtain in most programming languages (e.g. in Javascript it’s new Date().getTime()), and this way you don’t need to store data that might be considered private, such as the IP address.
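Here is a minimal sketch of the three-step flow in Java, using the install-time timestamp as the anonymous ID. The endpoint URL and parameter names are hypothetical; steps 2 and 3 (the server-side script and the database update) are assumed to live behind that URL:

    // UsageStats.java - sketch of step 1 (initiating communication on the
    // user machine); steps 2 and 3 (the server-side script and the database
    // update) are assumed to sit behind the hypothetical URL below.
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class UsageStats {
        // Anonymous ID: milliseconds since 1970, captured once at install time.
        private static final long INSTALL_ID = System.currentTimeMillis();

        public static void report(String event) throws IOException {
            URL url = new URL("http://example.com/stats.php"); // hypothetical endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            byte[] body = ("id=" + INSTALL_ID + "&event=" + event)
                    .getBytes(StandardCharsets.UTF_8);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);            // no IP or personal data is sent
            }
            conn.getResponseCode();         // fire and forget
            conn.disconnect();
        }

        public static void main(String[] args) throws IOException {
            report("app_start");
        }
    }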

Part of the interface optimization can also be included in this section, with specific techniques:

      • dynamic definition: brings an improvement in memory usage and programming time
      • changing / destroying graphical elements: can be very important on mobile phones, since an app might be interrupted by a call, but after it’s over it has to be able to revert to the initial state
      • intuitive interaction
      • visual space: influenced by the container object, the actual functionality and by the equilibrium between volatile notifications and the ones which contain full sets of data
• recycling graphical elements: through geometric models (e.g. rotation, for ‘back’ and ‘forward’ buttons) – see the sketch after the note below

Note: by ‘volatile notification’ we mean a message that appears for a few seconds rather than taking up space indefinitely.
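As a minimal sketch of the recycling idea (the drawing code and dimensions are my own, purely illustrative), one arrow image can serve both navigation buttons by rotating it 180 degrees:

    // ButtonRecycling.java - reuses a single arrow image for both the
    // 'back' and 'forward' buttons by rotating it 180 degrees, instead
    // of storing two separate graphical resources.
    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.geom.AffineTransform;
    import java.awt.image.BufferedImage;

    public class ButtonRecycling {
        public static void main(String[] args) {
            // Draw the 'forward' arrow once.
            BufferedImage forward =
                    new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = forward.createGraphics();
            g.setColor(Color.BLACK);
            g.fillPolygon(new int[] {8, 24, 8}, new int[] {6, 16, 26}, 3);
            g.dispose();

            // Recycle it: 'back' is the same image rotated 180 degrees
            // around its centre (16, 16).
            BufferedImage back =
                    new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g2 = back.createGraphics();
            g2.drawImage(forward,
                    AffineTransform.getRotateInstance(Math.PI, 16, 16), null);
            g2.dispose();

            System.out.println("one resource, two buttons");
        }
    }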

       

Handling graphical resources

How images are used, along with the native loading mechanisms, can play an important role in optimizing a program’s performance, especially when images intervene directly in its dynamic behavior. Below is a comparison of the most widespread formats. As image dimensions increase, the speed differences widen, but the ordering remains constant. So for small images the intrinsic features of the formats take precedence (the choice should be based on the main purpose: transparency, color depth, etc.), while for bigger images additional metrics can be constructed to help us decide (e.g. [format loading speed] * [format storage space]).

      Figure 2: Speed ratings based on image loading stress tests on user machines
      (component of my bachelor thesis: “The systemic impact of client-side apps. Optimization techniques”)
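To make the combined metric concrete, here is a small sketch that ranks formats by [loading speed] * [storage space], treating a lower product as better. The format names are real, but the numbers are invented for the example and are not measurements:

    // FormatMetric.java - ranks image formats by load time multiplied by
    // file size (lower is better). All numbers below are invented sample
    // values, not benchmark results.
    import java.util.Comparator;
    import java.util.List;

    public class FormatMetric {
        record Format(String name, double loadMs, double sizeKb) {
            double score() { return loadMs * sizeKb; }
        }

        public static void main(String[] args) {
            List<Format> formats = List.of(
                    new Format("PNG",  12.0, 180.0),
                    new Format("JPEG",  9.0,  95.0),
                    new Format("GIF",  10.0, 140.0));

            formats.stream()
                   .sorted(Comparator.comparingDouble(Format::score))
                   .forEach(f -> System.out.printf("%-5s score = %.0f%n",
                                                   f.name(), f.score()));
        }
    }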

       

Apps diversify further as we take more working parameters into consideration. For example, the actual process of reading data from a file can depend heavily on the operating system, just like a graphical container object that displays an image. The impact also differs between mobile systems and desktop or laptop systems, in the sense that the former almost always run on battery, creating a need not just to optimize the actual rendering of an image but also to reduce the number of renderings. Furthermore, in the context of the ever more pervasive connectivity of the 3rd millennium, multiple types of client-side apps have developed, with dedicated computation devices (e.g. video boards) and differentiated applicative purposes (e.g. video games vs. web browsers) as distinguishing criteria. The current technological picture is rounded out by emergent formats such as JPEG XR (also called HD Photo), promoted by Microsoft, APNG (Animated PNG), promoted by Mozilla, and SVG (Scalable Vector Graphics).

       

      Conclusion

We have gone over a number of techniques applicable in a large set of contexts, suggesting a unitary optimization strategy that starts in the design phase (or even at specification) and reaches all the way to implementation. We have also touched on the area of automated comparisons; we’ll talk about the research possibilities they open in future articles.

       


        Definitions

Augmented Reality (AR) has many definitions. One of them describes it as a field of computer research dealing with the combination of real-world and computer-generated data (virtual reality), where computer graphics objects are blended into real footage and representations in real time. Reality augmentation will touch many areas in the future, making the borders between current concepts fuzzier. However, we can already distinguish the tools and foundations upon which this paradigm relies (among them networks, sensors and interfaces), which makes it easier to grasp what AR really means.

         

        Fields of application

AR already affects multiple categories of activity, reaching beyond research and prototypes into the realm of practical, commercial solutions. Such categories include education, industry, advertising, entertainment and tourism, and future products may well come to form a category of their own.

         

Educational projects can, for example, take storytelling to a new height through interactive events or specially designed books. They are also a good way to start learning about AR, because they introduce two elements, the camera and the fiducial markers, which are encountered in nearly all of the categories mentioned. The camera is needed as a video input device, of course, and the fiducial markers are simply distinguishable patterns that can be processed by the software component. This not only provides data on 3D position and orientation, based on an angle considered in a (geometric) polar coordinate system, but also allows appropriate rendering of virtual objects onto the real scene. Another educational use would be an interactive encyclopedia allowing 3D structure visualisation, proof-of-concept lessons (e.g. the game of chess) or even travelling around the world by turning pages. A tie-in to actual medical uses can also be achieved by teaching anatomy or by displaying investigation results naturally on a human placeholder, making these processes more intuitive.
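As a toy illustration of that polar-coordinate step (the method and numbers are mine, purely for demonstration), converting a marker’s detected distance and angle to Cartesian coordinates gives the renderer an anchor point for the virtual object:

    // MarkerPosition.java - converts a fiducial marker's detected polar
    // coordinates (distance r, angle theta) into Cartesian coordinates,
    // which a renderer could use to place a virtual object over the marker.
    public class MarkerPosition {
        public static double[] toCartesian(double r, double thetaRadians) {
            double x = r * Math.cos(thetaRadians);
            double y = r * Math.sin(thetaRadians);
            return new double[] { x, y };
        }

        public static void main(String[] args) {
            // Example: a marker detected 2.0 m away at 30 degrees.
            double[] p = toCartesian(2.0, Math.toRadians(30));
            System.out.printf("overlay anchor at x = %.2f m, y = %.2f m%n",
                              p[0], p[1]);
        }
    }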

         

         

Industrial applications are already in use on a large scale. For example, AR can be used on assembly lines for guidance in assembling or repairing certain devices. In the current context, the relation to advertising is clearly visible through interactive ads for automobiles or windmills; Toyota, for example, offers software that anyone can install, requiring only a webcam and a printer (to print the fiducial markers, which are provided in a PDF document). On the same rationale, tourism offers even more possibilities. The Wikitude app for Android, essentially a travel guide, overlays information on live footage of a location, provided by the mobile phone itself. Another remarkable breakthrough is Microsoft Photosynth, a new way of visualizing and assembling large image sets through strikingly responsive interactions. Its navigation paradigms include link-based movement, which translates into expanding or contracting photo contexts and is itself a new way to travel or to find information. CNN heavily popularized Photosynth during Barack Obama’s inauguration as US president, and it pushed the technology further when Wolf Blitzer talked to a reporter’s hologram on air.

         

         

Entertainment is another category that already implements AR features, ranging from mobile phone games to stereoscopic imaging that allows actual 3D worlds to be created and rendered from the user’s physical point of view. Game producers are already releasing titles with such capabilities, and we are likely to see an explosion of products in this field.

         

         

        Present and Future

The future looks very promising, with upcoming devices such as Nokia’s Morph, which is enabled by nanotechnology and showcases a never-before-seen flexibility and wealth of features. For example, you can wear it like a watch and even spill liquids on it without affecting it.

         

Another concept, for now only a prototype, is Sixth Sense. It takes interaction to a whole new level, significantly improving various aspects of users’ lives. A central component is cloud computing, which already demonstrates immense benefits and ubiquitous reach.

         

A special category of readily available products is interactive surfaces. There are corporate solutions, like Microsoft Surface, but projects seeking inexpensive ways to achieve the same results, and even more, such as iDisplay, are already starting to appear.

         

        Conclusion

This article has aimed to summarize state-of-the-art concepts and products in AR, covering software and hardware for everything from large-scale devices to mobile phones. Since portability is a crucial issue at the beginning of the 3rd millennium, and the industry is seeing spectacular developments in this area, especially with the birth of the iPhone and Android, it will be interesting to look at the connection between interface and functionality in mobile devices. Programming for these devices is very similar to programming on classic computers, but it also has specific efficiency standards, which we’ll look at in future articles.

         

        Resources

Polar coordinate system (Wikipedia article)

Microsoft Photosynth (official website)

Sixth Sense (official website)

Microsoft Surface (official website)

iDisplay (official website)

         


Physical connectivity was never the most popular option in the technological world. Although wired connections were more reliable, more secure and better in quality, people always preferred the wireless counterpart: the convenience of not being physically tethered suited their active lifestyles. The traditional telephone is losing the battle against its mobile successor, and Wi-Fi keeps gaining ground in connecting people to the internet. But can we claim to have a truly wireless life? The biggest and most important aspect, power, still keeps us physically connected. You may argue that you are wireless when using a laptop, but only for as long as the battery lasts; even as new long-life battery technologies emerge, the device still has to be plugged in to recharge. Have we hit a wall in the quest for a wireless life? Maybe not. This year’s Consumer Electronics Show (CES) unveiled the very technology that could change our lifestyle dramatically over the next decade.

           

Wireless electricity is a concept in which your devices no longer need to be connected to power sockets to obtain electricity. The power transfers to the device without any physical connection such as a traditional copper wire. Your television, for example, would no longer need a plug; it would draw the required electricity wirelessly from a particular position or positions within your house. Your mother would no longer need to hunt for the power cable for her food processor: it could simply sit on the countertop and do its job.

           

So how does this work? The best method found so far is induction: a transformer in the street, or the charger of your mobile phone, uses this principle. Its biggest drawback, however, is that it only works at short range. The receiver has to be in close proximity to the transmitter, and as the distance between them increases, the efficiency of the transmission drops, wasting a large amount of power. “Resonant induction” overcomes this drawback: the transmitter and receiver are tuned to a mutual resonant frequency, which lets energy transfer between them far more efficiently and extends the range of transmission to several metres.
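To make “tuned to a mutual frequency” concrete: both coils are built so that their inductance L and capacitance C give the same resonant frequency (the component values below are invented for the example):

    f = 1 / (2π · √(L · C))

    e.g. L = 24 µH, C = 100 pF  →  f = 1 / (2π · √(2.4 × 10⁻¹⁵ s²)) ≈ 3.2 MHz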

           

Now let’s leave the nerdy stuff and get to the more geeky information. Fulton Innovations was one of the biggest names at CES this year, and its “eCoupled” technology is expected to be used by a wide range of consumer devices. eCoupled uses circuit boards and coils to transmit energy, though it still works effectively only at close range. Its current product, a charging mat, lets any compatible portable device charge when laid on top of it. Yes, no wires: just lay your iPod or your mobile on the mat and it will power up. The same principle extends to the kitchen, where you can power an appliance by placing it at a particular spot on the counter; to stop it, you simply move it away. Fulton currently claims the system can transmit 1,500 watts of electricity through a kitchen counter at 98.6% efficiency.

Although Fulton was the “wow” of CES 2009, others brought similar and equally impressive technology. One application was a wireless auditorium in which none of the devices was connected physically: a laptop and a projector were linked wirelessly to carry out a presentation while both were simultaneously powered by wireless electricity. In a few years you may no longer need to carry power or VGA cables; you could just open your laptop and it would connect to the projector wirelessly while being powered by wireless electricity. Remember the mobile credit reload method that lets one customer transfer mobile credit to another through a text message? What if you could top up your phone’s power by similar means? Well, you can. One company has developed technology to transfer power from one consumer device to another through wireless electricity, and it even allows you to “daisy chain” devices so that a chain of them passes electricity along as a collection of transmitters and receivers. So the next time your mobile runs out of power, you can just borrow some from your friend’s iPod or mobile.

           

So is wireless electricity a new concept? Not really. The notion has been around for many years, but few attempts have gone beyond the initial prototype; the most recent was the British company Splashpower, which introduced its own charging pads in 2004. How will it be different this time? Will Fulton become another bankrupt firm in a few years? Maybe not. The Wireless Power Consortium, dedicated to establishing a common standard for wireless charging, was formed last December, and big names such as Philips, Sanyo, Logitech and Texas Instruments have joined it alongside Fulton. This should allow the development of a single, efficient method of wireless power transmission. The biggest challenge they face is maintaining high efficiency: although there are also losses in traditional transmission through copper wire, they are considerably lower than the losses of wireless electricity. Unless this drawback is sorted out, there is little incentive to move to the technology, no matter how cool it looks. But let us remain optimistic: this could very well be our next life-changing innovation after the computer.

           


           



It all began in 1973, when the first mobile phone call was made by Dr Martin Cooper (then a general manager at Motorola) to his rival Joel Engel, Head of Research at Bell Labs. Dr Cooper invented the mobile phone in 1973, but it was some time before mobile phones became commercially available.

It was not until 1989 that Sri Lanka was introduced to mobile telephony, by Celltel Lanka Limited (since rebranded as Tigo). It is worth noting that Sri Lanka was the first country in South Asia to receive this service. Back then, handsets were large, expensive and typically used only by well-to-do high flyers. Today things are very different: nearly 40% of Sri Lankans have a mobile phone, and penetration is predicted to reach 50% by mid-2009.

So, with nearly half the population carrying a mobile phone, it is fair to say it has become the new mass media; counting them off, it is the seventh. The traditional mass media are well known and established, with well-understood formats. Print (dating from the 1500s) introduced the business model of owning a book, and later advertising and subscriptions for newspapers and magazines. With Thomas Edison’s invention of the sound recording device in 1877, recordings became the new form of mass media in the 1890s.

Cinema soon followed (1900s), with moving images, multimedia content and the business model of paying every time you viewed a movie. In the 1910s, radio broadcasting was introduced, bringing a ‘streaming’ approach to content delivery (if you didn’t listen, you missed the content). The Sri Lanka Broadcasting Corporation became the first radio station in Asia when it started broadcasting on an experimental basis in 1923. Radio was a powerful medium because everyone received the content simultaneously once it was broadcast. Television (1950s) married the multimedia of cinema with the broadcast reach of radio, and TV has been the dominant mass medium for the past 50 years.

The 1990s brought a shake-up of the mass media industry. Imagine having all the previous mass media replicated in one medium. Yes, enter the internet. Read a book, download a recording, watch a movie, listen to radio, view TV: you name it, it can be done. Add two more features, interactivity and search, and it becomes a threat to the previous five media. We no longer end our connection with an article simply by reading it; we can respond immediately with a comment on how we feel about it, opening a new window on bringing the world closer by connecting people. Search has become the most used application on the web and has made companies such as Yahoo and Google worth billions of dollars. With such a big player in the market, is there any room for a newer form of mass media that can replicate the success of the internet, or of the other five?

Enter the seventh mass medium, the mobile phone. Like the internet before it, it can replicate everything the previous six mass media can do. Mobile’s influence will be greater than anything we have seen so far from the internet: mobile will be to the internet, in audience reach and impact on society, what TV was to radio in the second half of the last century. Don’t believe me? I wouldn’t have either, until I read what follows.

The mobile phone has a number of prominent benefits unavailable in previous mass media. First and most importantly, the mobile phone is the first truly personal medium: we do not share it even with a spouse. Second, we always carry it around; many of us even sleep with the phone in bed and use it as our alarm clock. Which brings us to the third benefit: the phone is the first always-on mass medium. Getting alerts via SMS on one’s phone is now catching on in Sri Lanka.

The fourth benefit is of equal importance: the phone has a built-in payment mechanism. No other medium has one; even on the internet you have to provide a credit card or subscribe to a service like PayPal. Yet already today the older media collect payments through the mobile phone, and TV shows like Superstar earn millions via SMS votes.

Fifth, with phones now shipping with built-in cameras and prices falling, more people can afford a device that nearly replaces the digital camera. As the cameraphone (also our video recorder) is in our pockets, always ready to snap images and clips, we rarely need the digital camera stored safely away in its case at home. In a fast-paced, volatile world, it is possible to capture unique events on a mobile device and share them with the world by submitting the user-generated content to YouTube or CNN’s iReport, radically changing the media landscape.

Sixth, with so many young adults carrying mobile phones, it has become a trend for them to fiddle with their handsets while idling at social gatherings or on a bus or train journey. If not sending a text message, they are busy playing an addictive game downloaded free from the web via GPRS. These are potential hot spots for companies and advertisers to think seriously about, not in the future but now: advertisements can be embedded within mobile games, allowing the games to be given away for free and reaching the maximum user base. The possibilities are endless.

Finally, the seventh benefit is that the mobile phone captures the most accurate customer information of any medium. In a May 2007 report, AMF Ventures measured that TV can identify about 1% of audience data and the internet about 10%, whereas on mobile about 90% of audience data can be identified.

It is important to note that the phone will not kill the other media; they will all adjust, just as radio did to TV.

With the above facts noted, it is fair to say that mobile advertising is here to stay and could revolutionize how marketing reaches the end user. With a level of precision not even present on the web, targeted and personalized advertising content will make end users active participants in promotions. Here in Sri Lanka, Value Added Services (VAS) for mobiles are still in their infancy; the mobile networks have a lot of work ahead and should educate their subscribers about what VAS can offer. As for advertising firms: if the importance of the seventh mass medium is not taken seriously, be prepared to drop out of the competition.