CIO Review >> Magazine >> October - 2012 issue

Leading the Context-aware Mobile Application Development


Tuesday, October 2, 2012

CR Team

Many first-generation authoring platforms and mobile strategies focus on replicating portions of customer and employee Web portals on a mobile device. This is a logical and necessary first step for enterprises, but it is not the endpoint. New technology is enabling mobile-only capabilities that the conventional Web could never deliver.

Consider mobile applications that bring up your boarding pass as soon as you reach the airport counter, or act as room keys via near-field communication (NFC) on arrival at a hotel. These applications leverage mobile-only features such as location, motion, device type, and other proximity sensors to adapt their behavior to the user's convenience. Thus, a travel application that "knows" you are going to play golf can automatically alert you to sudden rain or other changes in the weather (picking up weather information for your course location), and a healthcare application can simply speak to you or your loved ones when it is time for medication or for monitoring vital readings.
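The pattern behind these scenarios is a rules engine that reacts to changes in sensed context. The sketch below is purely illustrative — the class and field names (`Context`, `ContextEngine`, `on_context_change`) are assumptions for this example and not part of any actual product API:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Context:
    """A snapshot of sensed context; the fields are illustrative."""
    location: str = ""
    weather: str = "clear"
    nfc_tag: str = ""


@dataclass
class Rule:
    condition: Callable[[Context], bool]
    action: Callable[[Context], str]


class ContextEngine:
    """Fires actions whose conditions match the latest context snapshot."""

    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def add_rule(self, condition, action) -> None:
        self.rules.append(Rule(condition, action))

    def on_context_change(self, ctx: Context) -> List[str]:
        # Evaluate every rule against the new context and collect actions.
        return [r.action(ctx) for r in self.rules if r.condition(ctx)]


engine = ContextEngine()
# Airport counter reached -> surface the boarding pass.
engine.add_rule(lambda c: c.location == "airport_counter",
                lambda c: "show_boarding_pass")
# NFC tap on the hotel door -> act as a room key.
engine.add_rule(lambda c: c.nfc_tag == "hotel_room_312",
                lambda c: "unlock_door")
# On the golf course and rain detected -> weather alert.
engine.add_rule(lambda c: c.location == "golf_course" and c.weather == "rain",
                lambda c: "alert_weather")

actions = engine.on_context_change(Context(location="airport_counter"))
```

A real platform would feed `Context` from GPS, motion, and NFC sensors rather than constructing it by hand, but the adapt-to-situation logic reduces to this condition/action shape.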

These new-generation applications that customers love to use have become possible due to recent advances in location-aware, speech, and other sensory technologies that are now part of most popular mobile devices. Speech and other modes of interaction for the user's convenience are now standard on most mobile device platforms.

However, authoring applications with these context-aware features does not require enterprises to deviate from the established Web-like development paradigm or get locked into often proprietary, device-specific development.

"User experience is critical to the delivery of sophisticated mobile applications," says Shekar Pannala, EVP & CIO of Bank of New York Mellon. "To that end, we have put in place the foundation technology for natural, device-appropriate interaction. As we roll out applications, we will be enabling context-aware interaction using voice, gesture and other natural interaction capabilities. Users will be able to simply say 'transfer money from account A to account B' as well as use touch or keypad inputs. While we may not implement all the features on day one, it is important to have an architecture for the future."

Context-awareness can make personal mobile devices more responsive to individual needs, helping to proactively anticipate a user's situation and intelligently deliver personalized information through real-world applications and services, using sensor inputs, data analytics, social networks, and delivery context. Somerset, New Jersey-headquartered Openstream leads the pack in providing context-aware features for enterprises and mobile-application authors through its W3C open-standards-based Cue-me™ mobile development platform, and was selected as a Cool Vendor in Context-Aware Computing by Gartner in 2011.

"Openstream's mobile platform enables enterprise solutions that help transform the daily activities of an executive on the move, through compelling and convenient speech and touch user interfaces that can adapt to the user's situation and preferences," says Raj Tumuluri, CEO, Openstream. Many field users tasked with form-filling applications find it cumbersome to use small keypads to input data on mobile devices, often resulting in partially filled forms and inaccurate data capture in the field.

A new paradigm of user interaction called multimodality promises to enable enterprises to overcome the challenges associated with user input and display on small devices. Apple's Siri speech interface and swipe gestures are examples of such interaction convenience on a mobile device.

Technically, the multimodal approach, or multimodality, refers to integrating graphics, text, and audio output with speech, text, and touch inputs to deliver a superior user experience. In essence, multimodal applications give users multiple options for inputting and receiving information.
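The key idea is that different input modalities are normalized into the same semantic result, so application logic stays modality-independent. The toy parsers below — their names and the banking command they parse (echoing the quote above) are assumptions made for illustration — show a spoken utterance and a touch-filled form producing the same semantic frame:

```python
def interpret_speech(utterance: str) -> dict:
    """Toy grammar for commands like 'transfer money from account A to account B'."""
    words = utterance.lower().split()
    if "transfer" in words and "from" in words and "to" in words:
        # Skip over the word "account" that follows "from"/"to".
        src = words[words.index("from") + 2]
        dst = words[words.index("to") + 2]
        return {"intent": "transfer", "from": src, "to": dst}
    return {"intent": "unknown"}


def interpret_touch(form_fields: dict) -> dict:
    """The same transfer expressed via a touch/keypad form."""
    return {"intent": "transfer",
            "from": form_fields["source"],
            "to": form_fields["target"]}


spoken = interpret_speech("transfer money from account A to account B")
touched = interpret_touch({"source": "a", "target": "b"})
```

Because both paths yield an identical frame, the downstream transfer logic need not know whether the user spoke, typed, or tapped — which is the practical payoff of multimodality for the form-filling problem described above.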


Cue-me: World's First Context-aware Multimodal Mobile Authoring Platform

With more than 700 types of mobile devices in the marketplace today, in all sizes and running various operating systems and versions, the prime goal for Openstream was to provide a robust, device-context-aware multimodal authoring platform that can adapt to newer device features and interface technologies, and that runs across all devices with appropriate, consistent features.

Openstream's Cue-me platform is based on an asynchronous, event-driven architecture, with design principles aligned to those of the World Wide Web Consortium's Multimodal Interaction (W3C MMI) architecture: modularity, extensibility, encapsulation, distribution, and recursiveness. Openstream co-authored the standard and actively contributes to various working groups, with key research focused on improving the accuracy and usability of mobile applications using context-aware multimodal interfaces.
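In the W3C MMI architecture, an interaction manager coordinates encapsulated modality components (speech, GUI, and so on) purely by exchanging life-cycle events such as StartRequest, StartResponse, and DoneNotification. The simplified sketch below illustrates that event-driven style; it is not Cue-me's implementation, and the class names are assumptions for this example:

```python
import queue
from dataclasses import dataclass


@dataclass
class LifeCycleEvent:
    # Event names follow the W3C MMI life-cycle events
    # (StartRequest, StartResponse, DoneNotification).
    name: str
    source: str
    target: str
    data: dict


class ModalityComponent:
    """An encapsulated modality (e.g. speech): reachable only via events."""

    def __init__(self, name: str, bus: "queue.Queue") -> None:
        self.name = name
        self.bus = bus

    def handle(self, event: LifeCycleEvent) -> None:
        if event.name == "StartRequest":
            # Acknowledge, then (after capturing input) report completion.
            self.bus.put(LifeCycleEvent("StartResponse", self.name,
                                        event.source, {}))
            self.bus.put(LifeCycleEvent("DoneNotification", self.name,
                                        event.source,
                                        {"result": f"{self.name}-input"}))


class InteractionManager:
    """Coordinates modality components through the event bus alone."""

    def __init__(self) -> None:
        self.bus: "queue.Queue" = queue.Queue()
        self.components: dict = {}
        self.results: list = []

    def register(self, component: ModalityComponent) -> None:
        self.components[component.name] = component

    def start(self, target: str) -> None:
        self.components[target].handle(
            LifeCycleEvent("StartRequest", "IM", target, {}))
        while not self.bus.empty():
            ev = self.bus.get()
            if ev.name == "DoneNotification":
                self.results.append(ev.data["result"])


im = InteractionManager()
im.register(ModalityComponent("speech", im.bus))
im.start("speech")
```

Because components interact only through events, a new modality (gesture, NFC) can be registered without touching the interaction manager — the modularity and encapsulation properties the MMI architecture calls for.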

Integrated Application & Security Management

Industry analysts affirm that the next-generation mobile strategy will state how the enterprise handles a mix of corporate- and employee-owned devices, making application management and security central to any enterprise mobile strategy. Openstream's Cue-me™ platform is the first mobile development platform with integrated application management and adaptive security. Secure-container-based application development enables an offline data store and access across enterprise mobile devices that can be centrally managed.

Revenue Model

Openstream earns perpetual and recurring-subscription (SaaS) license revenue for the platform, solutions, and services. The integrated authoring, deployment, and management approach helps achieve the lowest total cost of ownership (TCO) for enterprise mobility.

The Road Ahead