Video-based Requirements Engineering

Published on Apr 15, 2010

Video-based Requirements Engineering for Pervasive Computing Applications: An Example of "Preventing Water Damage"

[ see also http://www.mere-workshop.org/ ]

In today's software-intensive system development, negotiation between customer and supplier about required functionality is typically text-based. While text may be expressive enough to describe simple interactions between end-users and systems, it is severely limited when it comes to describing complex interactions unambiguously. Such interactions potentially involve a plethora of stakeholders and mobile devices, and may even require revolutionary interaction paradigms. For such innovative systems, we propose to employ "video-based requirements engineering".

Our film submission shows how a filmic representation of a scenario increases its understandability. The story is an example from the pervasive computing domain of home automation: The system protects the owner of an apartment from expensive water damage. After this exposition, the film shows how our tool can carry the results of a video-based analysis forward into subsequent system development steps: The same scenario is used in a short overview of the currently available tool support.


Transcript of the narration

The movie you just saw was the filmic visualization of a scenario for a home automation system.
We call this use of video technology "video-based requirements engineering". It helps a diverse set of stakeholders get a better picture and a common understanding of how a future system might work in real life, especially when the types of interaction go beyond standard mouse and keyboard.

To support the creation and further use of these videos, we developed a tool called XRave, the requirements analysis video editor.

XRave has two main functionalities:
First, it allows the creation of multi-path videos, so that alternative flows of events can be reviewed and the design space can be evaluated visually.

You can also use stubs for alternative paths that you want to model, but where no video material exists yet.
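
One way to picture such a multi-path video is as a small branching graph in which each node is either a recorded clip or a stub for a path without footage. The following Python sketch is only an illustration of that idea; the class and attribute names are our own assumptions, not XRave's actual data model.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Clip:
        """A node in a multi-path scenario video: real footage or a stub."""
        name: str
        video_file: Optional[str] = None             # None marks a stub (no footage yet)
        successors: List["Clip"] = field(default_factory=list)

        def is_stub(self) -> bool:
            return self.video_file is None

    # Scenario from the film: the system reacts to a detected water leak.
    leak = Clip("Leak detected", "leak_detected.mov")
    close_valve = Clip("System shuts the main valve", "valve_closed.mov")
    notify_owner = Clip("Owner is notified on a mobile device")   # stub, not filmed yet
    leak.successors = [close_valve, notify_owner]                 # alternative flows

    def walk(clip: Clip, depth: int = 0) -> None:
        """Print the design space of alternative flows, marking stubs."""
        print("  " * depth + clip.name + (" (stub)" if clip.is_stub() else ""))
        for nxt in clip.successors:
            walk(nxt, depth + 1)

    walk(leak)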

Second, you can annotate relevant people and objects in the video, as well as their constellations and interactions that lead to certain system behavior. This ensures agreement between stakeholders on focus and system scope.

The structuring and annotation of video material enables users to search and reuse clips in subsequent iterations or projects.

XRave can also generate a UML-like sequence diagram entirely based on the annotations. This provides a more formalized representation of the ongoing action and can serve as a starting point for functional or structural system decomposition.
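
As a rough illustration of how annotated interactions could be turned into such a diagram, the sketch below maps time-stamped sender/receiver annotations to PlantUML sequence-diagram text. The annotation fields and the choice of PlantUML output are assumptions made for illustration, not a description of XRave's internals.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class InteractionAnnotation:
        """A time-stamped interaction annotated on the video."""
        time: float        # seconds into the clip
        sender: str        # annotated person or object initiating the interaction
        receiver: str      # annotated person or object reacting
        action: str        # the system behavior the interaction triggers

    annotations = [
        InteractionAnnotation(12.0, "Water sensor", "Home automation system", "report leak"),
        InteractionAnnotation(13.5, "Home automation system", "Main valve", "close valve"),
        InteractionAnnotation(15.0, "Home automation system", "Owner's phone", "send alert"),
    ]

    def to_plantuml(annotations: List[InteractionAnnotation]) -> str:
        """Render the annotations, ordered by time, as a PlantUML sequence diagram."""
        lines = ["@startuml"]
        for a in sorted(annotations, key=lambda a: a.time):
            lines.append(f'"{a.sender}" -> "{a.receiver}" : {a.action}')
        lines.append("@enduml")
        return "\n".join(lines)

    print(to_plantuml(annotations))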

In the future, we want to integrate automatic object detection algorithms, as well as a mechanism to embed user interface prototypes from early phases of the requirements engineering process.
