Chapter 6: User Interfaces for Mobile Media

Searching, as a human process, was introduced in section 6.1; a number of searching techniques were introduced in Chapter 5; and how metadata aids searching was explained in Chapter 4. Now we focus on the user interface: formulating the search query (search criteria) and displaying the search results (selection criteria).

Often, searching is based on finding keywords (textual data) in the content or its associated metadata. In some cases the search may be extended to text that is related to the (embedded) content object. For instance, to find a certain image, the system looks for words in a message containing the image, or words in the paragraphs above and below the image in a document. Consequently, the simplest search user interface is a single textbox where the user can type the query. If the system supports re-using previous queries (which matches human search behaviour, as discussed in section 6.1), the component is a dropdown list box (Figure 6-18), where the user can select a previous search string and edit it before starting a new search. The system can also support saving searches and their results, enabling further re-use of previous or frequent searches and offline exploration of search results.

The user may refine the search by adding advanced search criteria (Figure 6-18): values and value ranges for attributes such as time, sender, content type, location, object size, date, or other metadata. Furthermore, it is beneficial if the user can prioritize the attributes, so that an attribute of high priority has a larger impact when the relevancy of a result is calculated. However, it is not necessary to enter keywords at all. The relevance feedback method (section 5.2) allows the user to select an object, or part of one, as the criterion for a search.
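As a minimal sketch of how user-assigned attribute priorities could influence relevancy, consider scoring each result by the weighted fraction of search criteria it satisfies. The attribute names, weights, and exact-match rule below are illustrative assumptions, not the system described in the text.

```python
def relevance(item: dict, criteria: dict, priorities: dict) -> float:
    """Weighted fraction of the search criteria that an item satisfies.

    `item` and `criteria` map attribute names (e.g. 'sender',
    'content_type') to values; `priorities` maps the same names to
    user-chosen weights. Unlisted attributes default to weight 1.
    """
    total = sum(priorities.get(attr, 1.0) for attr in criteria)
    if total == 0:
        return 0.0
    matched = sum(
        priorities.get(attr, 1.0)
        for attr, wanted in criteria.items()
        if item.get(attr) == wanted
    )
    return matched / total

criteria = {"sender": "Alice", "content_type": "image"}
priorities = {"sender": 3.0, "content_type": 1.0}  # sender matters more

# Matches only 'sender' (weight 3) out of a total weight of 4.
photo = {"sender": "Alice", "content_type": "video"}
print(relevance(photo, criteria, priorities))  # 0.75
```

A high-priority attribute thus dominates the score: a sender match alone outweighs a content-type match alone, which is the behaviour the prioritization feature is meant to provide.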
The leftmost image in Figure 6-28 demonstrates this with a contextual pop-up menu for an image embedded in a message. After the user selects the "Find related" menu item, the view on the right is displayed. It consists of the seed image and a list of related items; the attribute values they have in common are shown in bold under the seed image.

Instead of finding related information, another option is to search for similar content. This differs from the previous method in that the objects need not have any relations between them; instead, they should contain similar features. This kind of searching will become even more powerful as technologies for recognizing features of non-textual content evolve (such as detecting human faces in an image, recognizing video footage shot outdoors, or identifying a song's genre automatically), thus introducing new search attributes.
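The "Find related" interaction above can be sketched as ranking candidate items by how many metadata attribute values they share with the seed object; the shared values are exactly what the UI would render in bold under the seed image. The attribute names below are illustrative assumptions.

```python
def find_related(seed: dict, items: list, top: int = 5) -> list:
    """Rank items by metadata overlap with the seed object.

    Returns (shared_attributes, item) pairs, most overlap first;
    items sharing nothing with the seed are omitted.
    """
    ranked = []
    for item in items:
        if item is seed:
            continue  # never list the seed as its own relative
        shared = {a for a, v in seed.items() if item.get(a) == v}
        if shared:
            ranked.append((shared, item))
    ranked.sort(key=lambda pair: len(pair[0]), reverse=True)
    return ranked[:top]

seed = {"location": "Helsinki", "date": "2006-07-01", "type": "image"}
items = [
    {"location": "Helsinki", "date": "2006-07-01", "type": "video"},
    {"location": "Oslo", "date": "2006-07-01", "type": "image"},
    {"location": "Oslo", "date": "2005-01-15", "type": "audio"},
]
for shared, item in find_related(seed, items):
    print(sorted(shared), item["type"])
```

Similar-content search would replace the exact-value comparison with a feature-similarity measure (e.g. face, scene, or genre descriptors), since the objects involved need not share any metadata relations at all.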