|
|
==Design decisions==
|
|
|
|
|
|
The world is entering an era of pervasive computing in which networking and computing capabilities are increasingly integrated into the physical world. This means that we have to deal with highly distributed systems of networked interactive devices. Ambient Intelligence, as the technological discipline underlying AAL, builds on the paradigm of pervasive computing in order to create smart environments that are tailored to users' specific needs. A challenge that arises here is the wish to keep the multiplicity of functional units hidden from humans while making the functionality they provide easily available by supporting natural ways of interaction between human users and uSpaces. Considering that uSpaces target the assistance of several types of users with different needs, preferences, and cultural and educational backgrounds, this brings many additional challenges to the fore, such as adaptability, customization and personalization. Therefore, special attention has to be paid to designing interactive means that can adequately cope with these nontrivial and demanding requirements.
|
|
|
|
|
|
Several characteristics of uSpaces described above have already been captured by the universAAL Reference Model for AAL (uRM4AAL). A fundamental concept in uRM4AAL, which has strongly influenced the definition of the scope of the UIM expert group in [[Home|Chapter 1]], is the concept of input and output channels. As opposed to traditional Human-Computer Interaction, Human-Environment Interaction (HEI) must consider the fact that the input and output devices that realize the input and output channels might be distributed all over an uSpace. Examples of such I/O devices in a modern home environment are:
|
|
|
* one or more TVs and hi-fi devices whose displays and loudspeakers can be accessed,
|
|
|
* displays mounted on the wall in several rooms (especially at the entrance), possibly equipped with integrated cameras, microphones and loudspeakers,
|
|
|
* other displays integrated in diverse appliances (e.g., in the fridge door),
|
|
|
* mirrors capable of becoming displays,
|
|
|
* webcams, gaming cameras, loudspeakers and (arrays of) microphones, possibly installed in many "corners" of rooms and / or integrated in home appliances (e.g., phones providing displays, microphones, and (loud)speakers), and
|
|
|
* computing devices, such as notebooks and tablets, and personal devices, such as smartphones, all with integrated I/O devices.
|
|
|
This is why the UIM group definition already uses the term '''I/O infrastructure''' when talking about the set of concrete I/O channels available in an uSpace. The definition stresses that the concrete I/O infrastructure in one uSpace might differ substantially from that in another uSpace and deduces that a major challenge in HEI is the separation of application development from the management of the concrete occurrences of the I/O infrastructure in concrete uSpaces, thereby assuming that a UI Framework based on the paradigm of HEI sits in between:
|
|
|
|
|
|
[[https://raw.githubusercontent.com/wiki/universAAL/ui/UI-Framework_basic-role.jpg|245px|center|A UI framework separating the application & presentation layers.]]
|
|
|
|
|
|
=== UI Handlers and the I/O Channels ===
|
|
|
|
|
|
A question that arises here is why the above-mentioned "management of the concrete occurrences of the I/O infrastructure in concrete uSpaces" is symbolized as a set of "UI Handlers" in the figure above. The answer is:
|
|
|
* the term "UI Handler" is shorter than other terms, such as "I/O channel manager" and is more specific because of substituting the very generic abbreviation "I/O" with "UI" which is immediately associated with User Interaction / Interface (in most of the cases, the UIM expert group uses UI as an abbreviation for User Interaction though).
|
|
|
* As a matter of fact, these components are seen as responsible for handling requests for interacting with human users; to do so, they need to utilize the available I/O channels but are not necessarily supposed to take over the binding and management of those channels (see also the discussion about the next figure below).
|
|
|
* The "manager" of a single input or output channel is usually not able to handle the whole of a user interaction (UI) request sent by an application because applications often wait for user response in the context of some info to be presented to the user; as a result, in many cases, as least one output channel and one input channel are supposed to be utilized simultaneously by the same UI handler in order to be able to interpret user input in the context of the output presented to the user.
|
|
|
* Although it might be possible to develop one single UI handler that is able to utilize all kinds of I/O channels available in an uSpace, the platform should be open with regard to more specific UI handlers that are "experts" in utilizing certain kinds of I/O channels and guarantee a richer user experience. This might even go beyond the "expertise" in the utilization of I/O channels, so that in the future UI handlers might emerge that are experts in interacting with certain types of users.
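
To make the above-mentioned coupling of output presentation and input collection more tangible, here is a minimal Java sketch of what the contract of a UI handler could look like. All names are invented for this illustration and are not taken from the universAAL code base; the point is only that a single handler correlates the rendered output and the collected input via a shared dialog identifier:

<pre>
// Hypothetical illustration only: the types below are invented for this sketch
// and do not reproduce the actual universAAL UI bus API.

/** A request sent by an application: content to present plus the expectation
 *  of a user response that is interpreted in the context of that content. */
class UIRequest {
    String dialogID;          // correlates the presented output with the collected input
    Object contentToPresent;  // modality- and layout-neutral representation of the content
}

/** The user response returned to the application, bound to the originating dialog. */
class UIResponse {
    String dialogID;
    Object userInput;
}

/** Contract of a UI handler: it utilizes at least one output channel to render
 *  the request and at least one input channel to collect the response, so that
 *  the input can be interpreted in the context of the presented output. */
interface UIHandler {
    /** Render the request using the I/O channels this handler is an "expert" in. */
    void handleUIRequest(UIRequest request);

    /** Abort a running dialog, e.g. when the framework hands it over to another handler,
     *  returning any intermediate user input collected so far. */
    UIResponse cutDialog(String dialogID);
}
</pre>

In such a reading, "cutting" a dialog while returning the input collected so far is what would allow the framework to hand an ongoing dialog over to another, better-suited handler; this idea reappears in the discussion of adaptation further below.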
|
|
|
Now it is time to take a more precise look at the relationship between UI handlers and I/O channels: the uRM4AAL states that channels are realized by certain devices but does not say anything about their binding and availability in the virtual realm. In the figure below, we extend the specifications from the uRM4AAL by introducing the concept of ''Channel Binding''; the same concept can be used for both input and output channels:
|
|
|
|
|
|
[[https://raw.githubusercontent.com/wiki/universAAL/ui/I-o_channels.jpg|800px|center]]
|
|
|
|
|
|
It should be obvious that the device that realizes a channel has to have an integrated solution for providing access to the channel. This might be very low-level, at the hardware and firmware level, exchanging certain bits and bytes through a physical connector interface, or it might already be software using higher-level protocols and abstractions; we call the first type of provision ''Embedded Binding'' and the second type ''Driver''. Drivers might be universAAL-aware or not; they might be provided by third parties and / or run on third-party devices. A driver might wrap another driver in order to comply with higher-level abstractions[[#Footnotes|<sup>[1]</sup>]]. In particular, the higher-level abstractions might only make sense in the context of a framework created for certain purposes (cf. a windowing system that uses a mouse driver and already interprets the mouse events in the context of the objects managed by the framework). If a driver is universAAL-aware, it provides data to the virtual realm by publishing context events to the context bus and it renders services in the virtual realm by registering service profiles to the service bus. The next figure summarizes this understanding in terms of a concept map:
|
|
|
|
|
|
[[https://raw.githubusercontent.com/wiki/universAAL/ui/Access2channels.jpg|750px|center]]
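
To illustrate the case of a universAAL-aware driver mentioned above, the following Java fragment sketches, purely schematically, a driver that publishes captured data as context events and registers a channel-control service. The two bus interfaces are simplified stand-ins invented for this sketch; the real middleware API uses different types and signatures:

<pre>
// Simplified stand-ins for the context and service buses -- invented for this
// sketch; the actual universAAL middleware API differs in names and signatures.
interface ContextBus { void publish(String subjectURI, String property, Object value); }
interface ServiceCallee { Object handleCall(Object input); }
interface ServiceBus { void register(String profileURI, ServiceCallee callee); }

/** A universAAL-aware driver wrapping a microphone channel: it pushes captured
 *  data into the virtual realm as context events and exposes channel control
 *  (here, muting) as a service. */
class MicrophoneChannelDriver {
    private final ContextBus contextBus;

    MicrophoneChannelDriver(ContextBus contextBus, ServiceBus serviceBus) {
        this.contextBus = contextBus;
        // Render a service in the virtual realm: other components may mute this channel.
        serviceBus.register("urn:example:MuteMicrophone", input -> { mute(); return Boolean.TRUE; });
    }

    /** Called by the embedded binding / legacy driver whenever audio is captured. */
    void onAudioCaptured(byte[] samples) {
        // Provide data to the virtual realm by publishing a context event.
        contextBus.publish("urn:example:kitchenMicrophone", "capturedAudio", samples);
    }

    private void mute() { /* delegate to the lower-level driver */ }
}
</pre>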
|
|
|
|
|
|
It should be obvious now that the choice of which driver at which abstraction level to use in the development of a UI handler is up to its developer!
|
|
|
|
|
|
[[https://raw.githubusercontent.com/wiki/universAAL/ui/UIhandler2Driver.jpg|550px|center]]
|
|
|
|
|
|
It is worth mentioning that an input or output channel can often be associated with a certain location (the location of the device that realizes the channel), a modality, and a privacy level (indicating whether the channel is appropriate for private communication, as in the case of a telephone's handset or the screen of a cell phone). These are all important characteristics when it comes to the context-aware and personalized selection of an appropriate UI handler, which we consider to be a crucial task of the UI framework.
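
As a rough illustration of how such channel characteristics could feed into the selection of a UI handler, consider the following sketch in modern Java (records, Java 16+). The profile structure and the selection rule are invented for this example and do not reflect the actual matchmaking of the UI framework:

<pre>
// Illustrative only: possible metadata of an I/O channel and a naive selection rule.
import java.util.List;
import java.util.Optional;

enum Modality { GUI, VOICE, GESTURE, SMS }
enum Privacy  { PUBLIC, PRIVATE }

/** Characteristics of an I/O channel that are relevant for handler selection. */
record ChannelProfile(String location, Modality modality, Privacy privacy) {}

/** A UI handler advertises the channel profiles it can serve. */
record HandlerProfile(String handlerId, List<ChannelProfile> channels) {}

class HandlerSelector {
    /** Pick the first handler offering a channel at the user's location, in the
     *  preferred modality, and satisfying the required privacy level. */
    static Optional<HandlerProfile> select(List<HandlerProfile> handlers,
                                           String userLocation,
                                           Modality preferredModality,
                                           Privacy requiredPrivacy) {
        return handlers.stream()
                .filter(h -> h.channels().stream().anyMatch(c ->
                        c.location().equals(userLocation)
                        && c.modality() == preferredModality
                        && (requiredPrivacy == Privacy.PUBLIC || c.privacy() == Privacy.PRIVATE)))
                .findFirst();
    }
}
</pre>

The real selection performed by the UI framework is of course richer (see the ontological matchmaking described for PERSONA in the table further below), but the principle is the same: channel characteristics plus user and context information determine which handler gets a dialog.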
|
|
|
|
|
|
To close the discussion about the understanding of UI handlers, we would now like to address the analogy to the role of browsers on the Web. As is generally known, Web applications use HTML to separate their application logic (model & control) from their user interface (view) and delegate the visualization of this interface to arbitrary browsers about which they usually make no assumptions. The UI framework in universAAL can be seen as an extension of this model: it goes further by not caring about the kind of component that "renders" the user interface (why should we be limited to browsers?) and not caring about the modality used to present the info (why should it be just GUI-based?). Multimodal UI handlers are the logical step to go beyond the Web browsers of today in the specific setup of an uSpace.
|
|
|
|
|
|
One might say that even Web browsers can be developed with support for multimodality (there are already activities going on, e.g., the [http://www.w3.org/2011/04/webrtc/ Web Real-Time Communications Working Group] published its [http://www.w3.org/TR/2011/WD-webrtc-20111027/ first draft specification] on Oct 27, 2011); that is true, and we must monitor these developments and use / learn from them! However, we see two degrees of freedom for uSpaces that are not given on the Web per se:
|
|
|
* UI handlers in uSpaces can use the universAAL middleware to utilize universAAL-aware channel bindings distributed in an uSpace instead of being limited to only locally available drivers
|
|
|
* Compared to the Web, the UI framework for uSpaces can move easier beyond HTML, which is not really modality- & layout-neutral, and make use of the results of activities, such as [http://www.uiml.org UIML] and [http://www.w3.org/MarkUp/Forms/ XForms], that target the development of applications beyond the browsing of Web pages.
|
|
|
|
|
|
=== UI Model ===
|
|
|
The elements of a UI model as understood by the UIM expert group are:
|
|
|
* a language and model for describing user interfaces,
|
|
|
* the protocols between the UI framework and the pluggable components, i.e. UI handlers and applications,
|
|
|
* the adaptation parameters that might influence the above protocols to achieve adaptive UI (see the next section), and
|
|
|
* possibly also languages and models for the exchange of data between UI handlers and device drivers (<u>postponed to the next iterations</u>; however, the specifications of the [http://www.w3.org/2002/mmi/ W3C Multimodal Interaction Activity], such as [http://www.w3.org/TR/emma/ EMMA], seem to be highly relevant here).
|
|
|
As mentioned in the last bullet in the previous section, there are several approaches for defining a language and model for describing user interfaces. The UIM expert group must examine them and choose a solution that can build on top of the provisions by the other universAAL expert groups that deliver the more basic models, i.e. the Middleware, Security, Service Infrastructure, and Context Management expert groups.
|
|
|
|
|
|
All the design decisions here are in the scope of the abstract building block [[UI Framework#Refinement in terms of software artefacts|UI Manager]] and hence will be discussed in [[UI Framework|Chapter 3]].
|
|
|
|
|
|
=== Adaptation ===
|
|
|
One of the main incentives that helps motivate older people to use technology is providing them with simple, intuitive and easy-to-use interfaces.[[#Footnotes|<sup>[2]</sup>]]
|
|
|
Modern information systems often suffer from usability issues, many of which are attributed to the complexity of user interfaces. User interfaces for the elderly are even more demanding since they have to deal with very specific requirements. Constraints on user capabilities can be categorized according to the main groups of impairments:
|
|
|
* physical abilities (including motor disabilities, etc.),
|
|
|
* perception (including the understanding of visual and audio artefacts, touch, smell and taste), and
|
|
|
* cognition (including memory aspects, reasoning, etc.).
|
|
|
Interaction constraints have to be properly addressed and interaction obstacles avoided as much as possible in order to increase usability of AAL services.
|
|
|
Adaptation of the user interfaces has thus become one of the most important aspects that need to be addressed in order to best meet end user expectations and needs.
|
|
|
Inference and reasoning are increasingly used to adequately capture users' intentions. There is an obvious trend towards easily adaptable and customizable user interfaces. In this respect, proper modeling of the end user(s) becomes increasingly important.
|
|
|
Adaptive user interfaces (AUIs) can be defined as "... a software artifact that improves its ability to interact with a user by constructing a user model based on partial experience with that user." [[#Footnotes|<sup>[7]</sup>]]
|
|
|
Adaptation is influenced by the following four variables [[#Footnotes|<sup>[8]</sup>]] (a small illustrative grouping of such parameters follows the list):
|
|
|
* User: AUIs can adapt to the user's preferences, knowledge and skills.
|
|
|
* Task: Adaptation helps in the user's current activity.
|
|
|
* System: Adaptation adjusts to device capabilities and variables such as network connectivity.
|
|
|
* Context: Adaptation according to the user's current context.
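
Purely as an illustration, the following sketch groups possible adaptation parameters along these four variables; all field names are invented for this example and do not correspond to the universAAL profiling ontology:

<pre>
// Hypothetical grouping of adaptation parameters along the four variables above.
import java.util.Locale;

class AdaptationParameters {
    // User: preferences, knowledge and skills
    Locale  preferredLanguage     = Locale.ENGLISH;
    int     fontScalePercent      = 150;    // e.g. enlarged text for a visually impaired user
    boolean voiceOutputPreferred  = true;

    // Task: the user's current activity
    String  currentTask           = "medication-reminder";

    // System: device capabilities and connectivity
    boolean largeDisplayAvailable = true;
    boolean networkOnline         = true;

    // Context: the user's current situation
    String  userLocation          = "livingRoom";
    boolean otherPeoplePresent    = false;  // influences the privacy level of the selected channel
}
</pre>

In universAAL terms, such parameters would be maintained with the help of the pluggable user and context models and consulted by the UI framework when selecting and instructing a UI handler (cf. the adaptation parameters mentioned in the UI Model section above).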
|
|
|
Personalization refers to the optimization of a system's interface according to the end user's needs and preferences. It can be:
|
|
|
* a) user-specified or
|
|
|
* b) learned by the system.
|
|
|
In contrast to customization, which is a user-initiated and user-driven personalization process, adaptation is a system-initiated and system-driven personalization process.
|
|
|
Here it is also important to distinguish between terms such as ''adaptivity'' and ''adaptability'', which are often used synonymously.
|
|
|
A system is ''adaptive'' if it is able to automatically change its own characteristics according to end user needs, whereas it is ''adaptable'' if it provides the end user with tools that make it possible to change the system's characteristics.
|
|
|
In relation to customization and personalization, it can be said that personalization is a more general term for adaptivity and customization is a more general term for adaptability.
|
|
|
Adaptation can address:
|
|
|
* a) information that is to be presented (''information'' adaptation),
|
|
|
* b) way of presenting this information (''presentation'' adaptation) or
|
|
|
* c) how to interact with the presented information (''interface'' adaptation).
|
|
|
In universAAL, applications are responsible for providing the content, so information adaptation is out of scope for the UIM expert group. Most of the focus is on user interface adaptation in terms of interaction techniques.
|
|
|
|
|
|
=== Multimodality ===
|
|
|
''Multimodality is the ability of a software system or end-user device to allow users switch between different modes of interaction such as from visual to voice to touch, according to changes in context or user preferences. Multimodal systems may enable the usage of different types of input methods, even contemporaneously, such as natural language, gestures, face and handwriting characters in the same interacting session.''[[#Footnotes|<sup>[3]</sup>]]
|
|
|
As the current trend is for applications to become more ubiquitous, interaction is also heading in the same direction.
|
|
|
Different modalities are no longer addressed and evaluated only separately, nor are they used only in applications that run in controlled environments.
|
|
|
Multimodal interfaces are becoming more and more important because of current demands and trends in which no single interaction modality can achieve the desired effect on its own. That is why a collaborative and complementary approach is needed to interpret the data. Multimodal systems are especially appropriate for older people, who often suffer from reduced sensory, physical and intellectual capabilities [[#Footnotes|<sup>[4]</sup>]] and are in most cases not used to standard interfaces.
|
|
|
Every way of interacting needs a special interface. A common categorization is into:
|
|
|
* a) sensorial (digital augmentations of physical objects through sensory perception, e.g. augmented reality, 3D vision, scent-based interfaces, tactile, force, positional feedback),
|
|
|
* b) spatial (3D or 2D models of entities that the user can understand, e.g. touch screen, eye tracker, gesture based interaction) and
|
|
|
* c) natural language interfaces (linguistic entities such as words and phrases, e.g. voice or handwriting recognition).
|
|
|
All mentioned ways of interaction can be addressed by a proper UI Handler.
|
|
|
Since the development of specific handlers is a vast task in itself, only examples of some of them are developed in universAAL; they are described in chapter [[Common UIs|4 - Example UI Handlers in universAAL]].
|
|
|
|
|
|
=== Decision about reusable software ===
|
|
|
One of the '''reasons for the above classification''' also lies in the fact that matching and interpreting inputs from different modalities on the one side and presenting output to the user through the assigned user interface elements on the other side is very important, but not enough. It must also be possible to manipulate user interfaces: in terms of input, to alter input configurations, and in terms of output, to replace presentation elements at any time. The overall interaction experience must be excellent if we want users to be comfortable using AAL solutions and if we want such solutions to be commonly accepted. In addition, we must think in terms of different design needs, one of which is most certainly the possibility of brokering between the presentation and application layers.
|
|
|
|
|
|
Cross-expert-group '''consultations''' were established to coordinate the overall actions and steps. This was especially important at first, when the groups worked very intensively on defining their scopes and when overlaps had to be resolved.
|
|
|
This way it was ensured that every aspect would be covered by the most appropriate expert group.
|
|
|
|
|
|
Once the group definition had been agreed upon, a '''collection of information relevant to this group was initiated'''. Information from the input projects was summarized in a document whose structure was aligned with the group definition. In addition to this document, the most important links and papers were gathered and uploaded to the group's storage space, to which every project member had full access. This way all interested people could familiarize themselves with the work of this group. Furthermore, they could see the timetables for scheduled VoIP discussions, which they were welcome to join, and could follow and contribute to discussions in the group's online live text collaboration documents (more specifically in typewith.me).
|
|
|
In addition to the input projects, the UI group also got in '''contact with people outside the project'''; as a result, an expert representing the Universal Remote Console Consortium joined the group and future collaboration with people from Open Health Tools (OHT) was established (since their work on portable UIs was still in the proposal phase).
|
|
|
|
|
|
After discussing the main concepts from the input projects as well as the Universal Control Hub (UCH) architecture (the most used implementation of the URC framework and the most suitable one within the AAL context), the PERSONA and URC solutions were considered the most promising (and even a hybrid solution was considered at one stage), but after a more detailed inspection only the '''PERSONA solution remained as the one with the best mapping to the UI group definition'''. The most important reason for dismissing the URC solution was the fact that it did not address the problem of how to facilitate UI in smart interaction. (Although the URC solution was not adequate for the UI group, some other expert groups could still benefit from it.)
|
|
|
Although we had hoped to see several alternative solutions from the input projects from which we could pick the most appropriate ones as the basis for our future work, a closer look showed that they do not adequately address our specific problems.
|
|
|
|
|
|
==Analysis of input projects==
|
|
|
|
|
|
The following table compares the input projects, analysing how well they fulfil the design decisions taken.
|
|
|
|
|
|
<table cellpadding="3" cellspacing="0" summary="" border="1" bordercolor="000000" align="center">
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle">'''Category'''</td>
|
|
|
<td bgcolor="" align="left" valign="middle">'''Input project'''</td>
|
|
|
<td bgcolor="" align="left" valign="middle">'''Description'''</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle">UI model</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Amigo</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Regarding the communication between the layers, no more precise information is provided, at least in D4.1. It is only mentioned that the layers communicate through well-defined interfaces and protocols, but no concrete suggestions for them are made. Additionally, Amigo D4.1 proposes a Multimodal Interface Language (MMIL). This should support semantic information and specifications of hypotheses that help the MFM (see later) to make the best decisions, but a more exact description of this language has not been found.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">Persona</td>
|
|
|
<td bgcolor="" align="left" valign="middle">- RDF & OWL and a Java implementation of these models & languages;
|
|
|
- Special RDF resources that encapsulate the content of the messages exchanged on the input & output buses (InputEvent & OutputEvent);
|
|
|
|
|
|
- differentiation between two representations of data in OutputEvent, one machine-readable representation and one human-readable representation;
|
|
|
|
|
|
- XForms as a successful example for the above differentiation;
|
|
|
|
|
|
The usage of XForms is limited to its design principles and not the syntax: XForms divides the description of a form into two parts, one about what to present to the human user (with some hints about how to do that), and the other about the underlying data model. The first part is based on a set of UI controls that are linked to the latter (model) part of the form. PERSONA used its RDF- / OWL-based data representation in Java for the model part of the form descriptions and provided an implementation of the XForms UI controls in a Java package called the “dialog package”.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">URC</td>
|
|
|
<td bgcolor="" align="left" valign="middle">In the UCH architecture, the user interface socket is the UI model. It is described in the User interface socket description language, as specified in ISO/IEC 24752-2:2008.
|
|
|
The user interface socket (or short "socket") is an abstract UI model that is completely modality-independent. A socket has constants, variables, commands and notifications as its basic elements.
|
|
|
Resources are specified for each of these elements. Atomic resources are described in RDF and are bundled in resource sheets (ISO/IEC 24752-5:2008) which typically occur once per language. Examples for atomic resources are labels (text form, icons, etc.), help texts, and access keys. Supplemental resources can be provided by third parties.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle">modality fusion when capturing input and modality fission when presenting output
|
|
|
</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Amigo</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Fusion and fission are important aspects of Amigo and are performed by the MFM (multimodal fusion) and the Multidevice Service (MS). The MFM is the main interface between individual I/O devices and the system and is responsible for merging inputs from many modalities. The MS assists in finding the optimal set of devices and interfaces by matching the interaction characteristics of each available device with the properties of each so-called interaction request an application might formulate. So, if a message should be shown and the user is with his family in the living room, the message is shown on the TV; if strangers are present, the system will send him an SMS instead.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">Persona</td>
|
|
|
<td bgcolor="" align="left" valign="middle">PERSONA UI Framework does nothing for modality fusion / fission. It is assumed that multimodal I/O handlers will provide their own specific solutions for such issues.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">URC</td>
|
|
|
<td bgcolor="" align="left" valign="middle">In the UCH architecture, the User Interface Protocol Module (UIPM) plays the role of the I/O channel manager. It is responsible for fusion and fission since it is the only component that knows about the controller devices connected.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle">brokering between application layer and presentation layer
|
|
|
</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Amigo</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Another main part of the architecture is the MDM (Multi Dialogue Manager). It sits between the modality services and the Amigo services/applications: it collects input data, processes it and passes it on to the applications; it also gets explicit instructions from the applications and transfers the user's demands to the corresponding devices. To realize this, the MDM is responsible for the communication with other components such as the Context Service. The final selection of the I/O channel is done within the MS, as mentioned before.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">Persona</td>
|
|
|
<td bgcolor="" align="left" valign="middle">The brokering is delegated to the input and output buses of the middleware; the principles are described in the context of the PERSONA middleware. As mentioned there, the I/O handlers register their profiles with their capabilities as an OWL class expression in the form of an instance of OutputEventPattern. On the other side, the output bus completes all output events received from applications with the so-called adaptation parameters (with the help of the Dialog Manager; see also the general explanations before point #1) and checks them against the available profiles based on simple ontological matchmaking, where a match (here, an appropriate I/O handler for handling the output event) is found if the output event can be approved as an instance of the corresponding OWL class expression (OutputEventPattern). In the case of input events, brokering is done based on a simple lookup of which InputSubscriber is waiting for an input related to a certain dialog with a given ID.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">URC</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Again, the UIPM does the brokering. In the UCH architecture, the internal architecture of a UIPM is left open – it can be anything from very simple to very complex in terms of adaptation to a user profile. In a simple case (currently implemented), the UIPM acts as a Web server serving HTML+JS user interfaces to the user, picking an appropriate HTML+JS template based on the user+device profiles. The actual match-making can happen either in the UIPM, or in the resource server that the UIPM connects to.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle">Adaptation</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Amigo</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Adaptation is done within the MDM as well as in the MFM and MS; the whole adaptation seems to be part of these components. The I/O devices themselves are not involved in the adaptation process. One part of the communication bus is a "Privacy and Personal Security" module, so privacy is not directly a part of the UI framework, but the system assures that only relevant messages will pass to it.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">Persona</td>
|
|
|
<td bgcolor="" align="left" valign="middle">The PERSONA UI framework provides adequate support for adaptivity while leaving the user and context models open and pluggable. It does not try to cover all required adaptation mechanisms in a centralized way, but delegates the tasks to pluggable applications and I/O handlers based on a natural division of work and takes over only the brokering task, which in fact leads to a context-aware and personalized selection of modality and, eventually, device.
|
|
|
How this works has already been explained in the course of the explanations so far (see also the output bus from the PERSONA middleware). Here, we provide only some complementary info based on the following image:
|
|
|
- The I/O handler selected for the presentation of the output must then “render” the modality- and layout-neutral representation of the content, in accordance with the adaptation instructions, in the specified modality, taking into consideration the specified language, privacy, and the modality-specific tuning parameters. The result of the conversion must then be presented to the user using an appropriate device at the specified location.
|
|
|
|
|
|
- While a dialog is running, the Dialog Manager can notify the output bus when any of the adaptation parameters changes; the output bus may then either notify the I/O handler in charge of that dialog to redo the above step (if the changed adaptation parameters still match its profile) or switch to another I/O handler (if the new situation cannot be handled by the previous I/O handler). In the latter case, the previous I/O handler is notified to abort the dialog while returning any intermediate user input collected so far, and the new I/O handler is then mandated to continue with the dialog presentation. As a result, follow-me scenarios can be realized, as well as a switch from a big display to a personal device when other people enter the location where the presentation takes place.
|
|
|
|
|
|
- In order to reduce the delay for fetching the adaptation parameters, the profiling component delegates rules for keeping the adaptation parameters up-to-date to the Situation Reasoner so that re-calculation and update of these parameters is always running in the background and the Dialog Manager can be sure that the values found in the database are always up-to-date and valid. This way, the fetch process will be as simple as querying the database.
|
|
|
|
|
|
- As mentioned above, the rules determining the values of dynamic profiling and adaptation parameters are delegated to the Situation Reasoner. As the profiling component and the Context History Entrepôt share the same database for storing their data, and the same database is used by the Situation Reasoner for evaluating the rules, powerful rules can be defined that combine both the contextual and the personalization data. These rules are defined in terms of SPARQL and indexed according to their dependence on the database events that should trigger the evaluation of the rule.
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">URC</td>
|
|
|
<td bgcolor="" align="left" valign="middle">There are two levels of adaptation:
|
|
|
1. The UCH finds a suitable UIPM, based on the targets available and the user+device profile given.
|
|
|
2. The UIPM itself may adapt to the user, using any mechanism it wishes, based on the user+device profile.
|
|
|
The UCH defines a security model for the handling of user+device profiles. On the user interface side, there is currently no global mechanism for the protection of privacy by the UIPM (but there is an option of marking a socket variable as "password", as in XForms). However, the user interface socket can be extended to contain privacy levels for its individual elements, which would have to be respected by the chosen UIPM (which may be certified).
|
|
|
</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle">Concrete set of I/O handlers and supporting UI services</td>
|
|
|
<td bgcolor="" align="left" valign="middle">Amigo</td>
|
|
|
<td bgcolor="" align="left" valign="middle">- voice service
|
|
|
- gesture service
|
|
|
- GUI service</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">Persona</td>
|
|
|
<td bgcolor="" align="left" valign="middle">- a pure GUI-based I/O handler using Java Swing
|
|
|
- a browser-based I/O handler for Web-based remote access
|
|
|
- a multimodal I/O handler with speech as the main modality, combined with gesture recognition on the input side and synchronized parallel GUI-based output as visual feedback if desired.</td>
|
|
|
</tr>
|
|
|
<tr>
|
|
|
<td bgcolor="" align="left" valign="middle"> </td>
|
|
|
<td bgcolor="" align="left" valign="middle">URC</td>
|
|
|
<td bgcolor="" align="left" valign="middle">- Generic UIPM serving HTML+JS templates for any targets
|
|
|
- Special iPhone HTML+JS template for DLNA digital media devices
|
|
|
- Special drag-and-drop HTML+JS template for tablet PCs and touch panels
|
|
|
- Special fully accessible HTML+JS template for blood pressure meter
|
|
|
- Windows Mobile based UIPM for TV, calendar/reminder, kitchen appliances
|
|
|
- Flash-based UIPM for DLNA digital media devices
|
|
|
- Windows Media Center based UIPM for TV, blood pressure meter, and reminder
|
|
|
- UIProtocol-based user interfaces (running on PDA) for TV, house appliances, security cam, calendar, etc.</td>
|
|
|
</tr>
|
|
|
</table>
|
|
|
|
|
|
|
|
|
=== Mpower approach===
|
|
|
For interaction with the users, two Proof of Concept applications (POCAs) were developed, mainly to demonstrate the Mpower platform. The Norwegian POCA demonstrates dynamic sharing of individual plans and information, while the Polish POCA demonstrates interconnectivity and integration of smart home technologies. These applications are not open source, unlike the previously mentioned services, and they are very closely tied to the services they demonstrate. Almost all UI components are used just for the visualization of specific data (e.g. Door, PulseOximeter and Temperature gadgets).
|
|
|
Users interact with them (i.e. with the system) via a GUI, mouse, keyboard and touch screen, so there is no support for spoken or body language. For more information, see the screenshots and demonstration videos of the Mpower POCAs here: https://project.sintef.no/eRoom/ikt/ICT-20097-ICTAgeing/0_3e619
|
|
|
For this reason, Mpower has no major components to which the UI Framework group description can be applied.
|
|
|
|
|
|
=== Oasis approach===
|
|
|
The OASIS User Interface Framework allows automatic UI self-creation for newly connected services and self-adaptation to the device used, the context of use, and the user's needs and preferences. All the UIs that comprise this framework have been developed following a specific user-centric methodology called OPAF, ensuring that all user requirements are fulfilled. All the UI elements created following the OPAF methodology (explained here: http://irgen.ncl.ac.uk/oasis/?p=OPAF) are collected in a Java Swing library for PC applications and an AWT library for PDA applications. Developers can download that library and create different GUIs according to their users' needs; the developer should create a different GUI for each user or situation. There are no dynamic adaptations of the user interface: when the user accesses an OASIS service, the best GUI according to his/her profile is loaded. If the environment or user status changes, the GUI will change the next time the user accesses the service.
|
|
|
|
|
|
<U>User input devices:</U>
|
|
|
* Touch screen terminal featuring a web browser
|
|
|
* microphone and speech recognition engine (not continuous speech, only concrete short commands)
|
|
|
|
|
|
<U>User output devices:</U>
|
|
|
* Touch screen terminal featuring a web browser
|
|
|
* Smart phone
|
|
|
* Loudspeakers
|
|
|
* PC
|
|
|
|
|
|
The user accesses the best possible service from different service providers; an ontology provides the reasoning for this.
|
|
|
|
|
|
The GUI is loaded when the user accesses a service. There are different GUIs with different levels of accessibility. Each time a user accesses a service, the system loads the most suitable GUI according to the user profile, the device, and the ambient conditions. There are no runtime adaptations.
|
|
|
The UI Adaptation Framework is a tool for developers: it provides all the graphical elements, designed according to the OPAF methodology, needed to create usable GUIs.
|
|
|
|
|
|
=== Soprano approach===
|
|
|
<U>User input devices were:</U>
|
|
|
|
|
|
* TV remote control for operating the TV GUI menu generated by servlets and presented by a web browser (integrated in the iTV module)
|
|
|
* Touch screen terminal featuring a web browser (web content generated by servlets)
|
|
|
* microphone and speech recognition engine
|
|
|
* in addition, the events sent by a number of sensors (motion sensors, door/window opening, use of electrical appliances, etc.) were monitored to recognise user activity.
|
|
|
|
|
|
The user input is:
|
|
|
* uplifted to predefined ontology-related statements to be processed by the SOPRANO reasoning engine (SAM, and in this case its Context Manager). This happens when the system needs user feedback in order to take a decision in some situation (the system detects a risk and asks the user a question, whose answer is uplifted into a form that can be processed by the system). The audio input from the microphone is processed in this fashion, with the speech recognition engine generating the proper formal statement.
|
|
|
* handled internally in the servlet engine (browsing through the menu, checking the appointments, house status, playing exercises etc.)
|
|
|
|
|
|
<U>Output devices:</U>
|
|
|
* TV
|
|
|
* (PC with) touch screen terminal
|
|
|
* loudspeakers
|
|
|
* mobile phones for SMS alerts
|
|
|
* additional devices like lamp (for visual alerting), pager, vibrating pillow etc.
|
|
|
|
|
|
There are two types of data presented to the user:
|
|
|
* static info derived from the system DB: a list of appointments, the status of the house, a set of exercise lessons, etc.
|
|
|
* ad hoc generated alerts. In this case the SAM decides, as a high-level concept, what kind of information should be presented to the user (e.g. inform the user about pending medication); then, based on the available media (output devices) and its internal workflow, it invokes the service dealing with the given device, providing more specific message content (e.g. the name of the audio file to be played, details for constructing the servlet message, etc.). This device service then takes care of presenting the content to the user via its device.
|
|
|
|
|
|
Middleware/platform-wise, very simple dialogs of two forms are supported: questions with yes/no answers and notifications that need a confirmation. This is done by utilizing workflows written in BPEL, where services can be executed and responses can be waited for. Multimodality is achieved by an extended version of the Diane Service Description framework in which, as a response to a semantic service request, simply multiple services are executed. In general, SOPRANO does not provide sophisticated dialog and multimodality support; the main goal was to keep it simple, shift complex dialogs and user interaction completely into the services and, if possible, not share any of this information with the central system.
|
|
|
|
|
|
=== Comparison with ISO/IEC 24752 or Universal Remote Console™ (URC) ===
|
|
|
URC (Universal Remote Console) provides a model consisting of targets (a service component, e.g., appliances providing certain services) and controllers / URCs (e.g., devices that can render user interfaces with which targets can be controlled) as well as a set of XML specifications for describing (1) targets and (2) their "user interface sockets" as well as (3) pluggable concrete user interfaces along with (4) resources used in them. From among these specifications, the second one, the User Interface Socket Description Language, has shown its practicality more than the others in the course of implementing the "Universal Control Hub (UCH) architecture" (which is an approach for implementing URC by providing a middleware between targets and controllers) within the [http://www.openurc.org/ openURC] alliance. The UIM expert group should check the fitness of (1) the User Interface Socket Description Language and (2) the UCH middleware for the purposes of the group.
|
|
|
|
|
|
The notion of User Interface Socket (UIS) in URC (cf. [http://myurc.org/TR/urc-tech-primer1.0-20081124/#User_Interface_Socket the URC Technical Primer]) refers to the abstract specification of the access interfaces of targets that can be used by third parties when developing user interfaces for controlling targets. A description of a UIS is therefore supposed to include info about state variables that can be queried or set, specific commands that a target accepts in order to perform a provided function, and possible notifications that the target might send to UI providers, e.g., because of a change in the target's state that needs the user's attention. In a [http://www.hi.se/Global/monami/04%20URC%20and%20WSDL%20-%20Towards%20Personal%20User%20Interfaces%20for%20Web.pdf presentation at the MonAMI workshop on Nov. 16th, 2010, in Passau, Germany], where the openURC alliance was also officially announced to the public by a press release, Mr. Gottfried Zimmermann, the inventor of URC, showed that for each UIS description there can exist a WSDL description of a Web Service and vice versa (see slide no. 16 in the presentation above for a general mapping)[[#Footnotes|<sup>[5]</sup>]]. This was also the actual conclusion reached by the UIM expert group at the end of a few virtual meetings (from August to October 2010) with Mr. Zimmermann, namely that URC is not related to the core business of the group (user interaction in uSpaces as smart environments) but to separating user interface development from the business logic of service components so that third parties can provide alternative UIs for accessing the same services. However, service-oriented architectures, such as universAAL, already provide for this feature per se. The argumentation in the above-mentioned presentation is that the overhead of providing pluggable UI modules based on UIS descriptions is much less than doing the same using WSDL-based descriptions. The UIM EG has not invested any effort in assessing the extent to which this statement is correct because, in universAAL, software artefacts residing on the application layer that provide UIs need to rely on nothing more than shared ontologies rather than WSDL, and this is an overhead that is always given when a framework supports semantic interoperability[[#Footnotes|<sup>[6]</sup>]].
|
|
|
|
|
|
With regard to the UCH middleware, the following considerations prevented the UIM EG from following it up further:
|
|
|
* On the one side, the main concept for personalization in URC is based on the provision of alternative UIs; therefore, the UCH middleware has not had support for adaptivity (context-awareness + personalization) in its focus (as of August 2011) and thus lacks a core requirement of UIM and a major feature needed in smart environments such as uSpaces.
|
|
|
* On the other side, the scope of the UCH middleware goes beyond the focus of the UIM EG, involving also the middleware, LDDI, and service infrastructure expert groups directly and all the other expert groups indirectly; hence, any decision here needs the involvement of all the technical work packages of the universAAL project.
|
|
|
|
|
|
The project board of universAAL is considering high-level interactions with openURC in order to evaluate further possibilities for collaboration. The UIM expert group has recommended considering interoperability at the level of device binding and, more importantly, at the level of sharing UI resources<!-- (see also [http://forge.universaal.org/gf/project/uaal_ui/forum/?_forum_action=MessageReply&message_id=404"e=1&thread_id=48&action=ForumBrowse&forum_id=89 this post in the UIM forum])-->.
|
|
|
|
|
|
== Adoption of technologies ==
|
|
|
According to the design decisions and the analysis of the input projects, it was decided to use the PERSONA approach as the basis in UIM.
|
|
|
|
|
|
== Footnotes ==
|
|
|
{|
|
|
|
|- valign="top"
|
|
|
|[1]||What the universAAL expert group [https://github.com/universAAL/lddi/wiki LDDI] does for binding non-universAAL-aware special-purpose devices basically follows a similar logic, namely using Embedded Bindings or Legacy Drivers to provide wrappers that interact at the abstraction level defined by universAAL. Conversely, developers of universAAL-aware drivers for binding input and output channels are expected to follow the LDDI specifications, especially
|
|
|
* the provided design pattern with the access, abstraction, and integration layers, as well as
|
|
|
* the recommendation about unifying the representation of equivalent domains in the abstraction layer independently from the concrete representations put forward by the legacy drivers (the fourth part of the UI Model enumerated [[#UI Model|here]] is a step towards providing such unifying representations).
|
|
|
|- valign="top"
|
|
|
|[2]||M. Obrist, R. Bernhaupt, M. Tscheligi, Interactive TV for the home: An ethnographic study on users' requirements and experiences, International Journal on Human-Computer Interaction 24 (2), 2008, 174-196.
|
|
|
|- valign="top"
|
|
|
|[3]||The universAAL Reference Model for AAL (see the universAAL deliverable D1.3 Part II)
|
|
|
|- valign="top"
|
|
|
|[4]||N. Alm, J. L. Arnott, L. Dobinson, P. Massie, I. Hewines: Cognitive prostheses for elderly people, 2001. 806-810.
|
|
|
|- valign="top"
|
|
|
|[5]||Prior to that, the members of the openURC alliance had already proposed to go for [http://www.w3.org/2010/02/mbui/soi/alexandersson.pdf extending URC to Web Services]; hence, the referenced presentation can be seen as a continuation of this proposal.
|
|
|
|- valign="top"
|
|
|
|[6]||The openURC presentation at [http://aaloa.org/workshops/amb11 AMB'11] shows that the UCH implementation has also started to add support for semantic interoperability using ontologies anyhow.
|
|
|
|- valign="top"
|
|
|
|[7]||Langley, P.: User Modeling in Adaptive Interfaces. In: Proc. Seventh International Conference on User Modeling, pp. 357–370. Springer, Heidelberg (1999)
|
|
|
|- valign="top"
|
|
|
|[8]||Looije, R., te Brake, G., Neerincx, M.: Usability Engineering for Mobile Maps. In: Proc. International Conference on Mobile Technology, Applications, and Systems (Mobility 2007), pp. 532–539 (2007).
|
|
|
|}