L. Feijs, and J. Hu, “Component-wise Mapping of Media-needs to a Distributed Presentation Environment,” in The 28th Annual International Computer Software and Applications Conference (COMPSAC 2004), Hong Kong, China, 2004, pp. 250-257.
DOI: 10.1109/CMPSAC.2004.1342840

Component-wise Mapping of Media-needs
to a Distributed Presentation Environment

Loe Feijs, Jun Hu
Department of Industrial Design
Eindhoven University of Technology
Den Dolech 2, 5600MB Eindhoven, The Netherlands
{l.m.g.feijs, j.hu}@tue.nl



Abstract
Whereas formal specification and verification have shown their value in improving the reliability and trustworthiness of traditional industrial systems, we contribute by applying them to distributed multimedia presentations in an Ambient Intelligence context. We investigate a mapping problem in which media needs are to be satisfied using given presentation resources. The goal of the investigation is to see whether Broy’s stream-based component framework can be used to model media-related interfaces and constraints in an elegant way. The formalization will serve as a framework for the development of an automated mapper that can handle real media needs and real presentation resources. It combines the well-known notation of Z with an underlying concurrency theory. We show that not only verification issues such as bandwidth and delay constraints can be handled, but also architecture-level issues such as network structure and media-type compatibilities.

1 Introduction

In the vision of Ambient Intelligence, technology becomes invisible, embedded in our natural surroundings, present whenever we need it, enabled by simple and effortless interactions, attuned to all our senses, adaptive to users and context, and autonomously acting [1]. Interactive media presentations will no longer be confined to a flat screen with 5.1 speakers, but will instead be distributed into the user’s environment. The environment with distributed and networked interfaces, functioning as an interactive theater, will engage people in more immersive experiences. In a previous study done at Philips Electronics, we have shown how to structure the content and the system to present interactive media in a distributed environment that consists of not only screens and speakers, but also lights and robots [6, 7]. This was done in the context of the EU-funded R&D projects NexTV [11] and ICE-CREAM [9]. (Figure 1 shows two users interacting with the ICE-CREAM demonstration, exhibited at IBC 2003 [8].)



Figure 1: ICE-CREAM theater at IBC 2003

One of the problems is the variety of such environments: think of how differently people arrange their living rooms. This has long been a problem for web browsers as well. In order to give users the same look-and-feel on a single web page, authors often have to work very hard with all kinds of tricks, and when these fail, write different versions for different browsers. On the other hand, web authors are lucky: if they only consider Internet Explorer, they will cover perhaps four fifths of the audience, and if they are kind enough to also consider Mozilla users, they will cover almost all of them. We are not that lucky. We cannot assume that there are only Explorer-like living rooms and Mozilla-like living rooms. They differ in terms of available components and connections. It is impossible to have one version that fits them all. Instead we should have a single abstract media description for all living rooms, and then map the media needs to every physical room, creating an experience for end users that is as close to the intended one as possible. Similar mapping problems are studied informally in [10].

We aim at the formalization of such mapping problems. Each mapping problem is defined by a set of media needs and a set of presentation resources. The media needs are abstract descriptions of the media content as well as of its requirements for the physical components and their connections, including for example audiovisual players, robotic interactors and lights. These physical components are the presentation resources. We employ Broy’s stream-based component framework [2]. The following issues must be addressed:

1. component interfaces for control and for events;
2. user interfaces: touch screens, switches and buttons;
3. throughput requirements and network delays;
4. embedding of presentation components into devices;
5. standardization of specifications (keeping the math away from the media developers).

The formalization will serve as a helpful framework for the development of an automated mapper that can handle real media needs and real presentation resources.

The mapping is to find presentation resources for a set of media needs so that each media item is properly presented. In a typical implementation a mapping is a process that at run-time deals with control and events, and that has access to a number of reserved presentation resources. A presentation could be built by a “mapping server”, which behaves as a factory [3], launching presentations. The mapping server takes a mapping problem (= media needs + presentation resources) as its input, solves the mapping problem, makes the reservations, and then builds and launches the presentation. In fact, a presentation could be structured as a main process created by the mapping server, serving the media needs upwards, and several distributed auxiliary processes downwards employing the local resources of the devices in the environment. Typically, a distributed presentation is built by a local server (also a factory). Although one may expect that one such server per device would be sufficient, this is actually not the case; the reason is that several mutually incompatible software abstractions may exist on the same device (for example, the Real player and the Microsoft media player on a Windows PC).
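The mapping-server pipeline described above can be sketched in Python. This is an illustrative sketch only; the function names and the stage interfaces are our assumptions, not part of the paper.

```python
# Hypothetical sketch of the "mapping server" factory pipeline: take a
# mapping problem (media needs + resources), solve it, reserve the
# chosen resources, then build and launch the presentation.

def launch_presentation(media_needs, resources, solve, reserve, build):
    mapping = solve(media_needs, resources)   # find feasible paths
    if mapping is None:
        raise RuntimeError("no feasible mapping")
    reserved = reserve(mapping)               # claim the chosen resources
    return build(mapping, reserved)           # main + auxiliary processes

# Toy run with trivial stand-ins for the three stages:
p = launch_presentation(
    media_needs=["cable"], resources=["STB"],
    solve=lambda needs, res: {"cable": "STB"},
    reserve=lambda m: set(m.values()),
    build=lambda m, res: ("presentation", m, res))
assert p[0] == "presentation"
```

The point of the sketch is the separation of concerns: solving, reserving and building are independent stages that a real mapping server would implement against actual devices.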

2 Preliminaries

To get started we consider a few simple media needs: a PAL video stream (cable), an MPEG video stream (the_sign), a dancing behavior (happy_puppy) and an interaction behavior (updown),

which have to be obtained from the following resources: a set-top box (STB), a modem, a high-resolution screen with PIP (hi_res), a low-resolution screen (lo_res), a robot (level2_robot) and a remote control (RC).

To model the media needs and presentation resources, we follow the approach of Broy [2] based on timed streams. We use drawing conventions of our own, derived from the ADL Darwin [4, 5], instead of the diagram notation of [2] (see [2], p. 7).

In an interactive presentation system, it is necessary to distinguish user interface channels from data streaming channels. For this we add another convention, namely output channels modeling real, physical presentations such as sounds, video frames and robotic movements, and input channels modeling the components through which the user may interact with the system, such as a GUI on a screen and physical buttons on a remote control.

Broy demands that a set S of types be given. We put

  S = {PAL, MPG, DBH, IBH, ℝ}

for PAL streams, MPEG streams, Dancing BeHavior, Interaction BeHavior and “things in the real world” (for example, buttons, screens and robots).

In Broy’s approach there is a discrete time frame representing time as an infinite chain of time intervals of equal finite duration. We take 1 second for that duration. This allows us to formally represent the facts for example that there are 25 video frames per second in a high-resolution video stream and that each frame takes a number of bits.
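The discrete time frame can be made concrete with a small sketch (ours, not the paper’s): a finite prefix of a timed stream is a list of 1-second intervals, each holding the frames delivered in that second, with a frame modelled directly as a bit string.

```python
# Illustrative stream model: a list of 1-second intervals of frames.
# The frame content and the chosen rates below are dummy assumptions.

def make_stream(seconds, frames_per_sec, bits_per_frame):
    frame = "0" * bits_per_frame              # dummy frame content
    return [[frame] * frames_per_sec for _ in range(seconds)]

# A PAL-like stream: 25 frames in every 1-second interval.
video = make_stream(seconds=3, frames_per_sec=25, bits_per_frame=100_000)
assert len(video) == 3                        # a 3-second prefix
assert all(len(interval) == 25 for interval in video)
assert len(video[0][0]) == 100_000            # bits per frame
```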

We assume functions on data:

  bits : PAL → {0,1}*    (and also for MPG, DBH)
  pr   : PAL → ℝ         (pr for presentation)
  pr   : MPG → ℝ
  pr   : DBH → ℝ
  pr   : IBH → ℝ

and obtain lifted versions of pr at timed-stream level:

  pr′  : PAL* → ℝ*             by (pr′(p)).i = pr(p.i)
  pr′′ : (PAL*)∞ → (ℝ*)∞       by (pr′′(x)).t = pr′(x.t)

and so on. The formal model has some structure of its own (Figure 2).


Figure 2: Formal model structure

In order to map this we must assume devices and connections. Figure 3 shows a possible configuration with the following devices: a set-top box (STB), a modem, a high-resolution screen with PIP (hi_res), a low-resolution screen (lo_res), a robot (level2_robot) and a remote control (RC),

and the following connections: STB–hi_res, STB–modem, modem–lo_res, modem–level2_robot and lo_res–RC.


PIC

Figure 3: A possible configuration

There are two possible ways of presenting the dancing behavior: either by an animated character on the PIP of the hi_res screen, or by the physical robot together with the RC providing user input. But the STB–modem bandwidth is not sufficient for the PIP option. So the robot must dance and the RC must provide the up-down interaction.

3 Specification

We need some abbreviations. For a timed stream z and n1, n2 ∈ ℕ,

  rate(z, n1, n2) == ∀ t : ℕ\{0} ∙ #(z.t) = n1
      ∧ ∀ i : ℕ ∙ 0 < i ≤ #(z.t) ⇒ #bits(z.t.i) = n2

with the intuition that e.g. rate(z, 100, 100K) means that 100 frames fit into one second and that 100K bits go into each frame (Figure 4).


Figure 4: rate(z,100,100K)
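The rate predicate can be checked on a finite stream prefix. This is an illustrative sketch under our list-of-intervals stream model (frames as bit strings), not the paper’s notation.

```python
# Prefix check of rate(z, n1, n2): every 1-second interval carries
# exactly n1 frames, each of exactly n2 bits.

def rate(z, n1, n2):
    return all(len(interval) == n1 and
               all(len(frame) == n2 for frame in interval)
               for interval in z)

z = [["0" * 100_000] * 100 for _ in range(5)]   # a 5-second prefix
assert rate(z, 100, 100_000)       # rate(z, 100, 100K) as in Figure 4
assert not rate(z, 100, 50_000)
```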

For timed streams x and x′, and Δ ∈ ℕ,

  delay(x, x′, Δ) == ∀ t : ℕ\{0} ∙ x.t = x′.(t + Δ)

and for m ∈ ℕ,

  maxdelay(x, x′, m) ==
      ∃ Δ : ℕ ∙ Δ ≤ m ∧ delay(x, x′, Δ)
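Both predicates can be checked on finite prefixes in the same illustrative stream model (an assumption of ours; maxdelay accepts any fixed shift of at most m intervals):

```python
# Prefix checks of delay and maxdelay over list-of-intervals streams.

def delay(x, x2, d):
    """x2 carries at interval t+d exactly what x carries at t."""
    return all(x2[t + d] == x[t] for t in range(len(x) - d))

def maxdelay(x, x2, m):
    """Some fixed shift of at most m intervals relates x to x2."""
    return any(delay(x, x2, d) for d in range(m + 1))

x = [["a"], ["b"], ["c"], ["d"]]
x_late = [[], ["a"], ["b"], ["c"]]     # the same stream, one second late
assert delay(x, x_late, 1)
assert maxdelay(x, x_late, 2)
assert not maxdelay(x, x_late, 0)
```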

The rate information is put into the source devices S1, S2, S3 and the sink S4.

  S1
    in  cable : PAL
    out z : PAL
    ----
    z = cable
    rate(z, 100, 100K)

  S2
    in  the_sign : MPG
    out z : MPG
    ----
    z = the_sign
    rate(z, 2, 5K)

  S3
    in  happy_puppy : DBH
    out z : DBH
    ----
    z = happy_puppy
    rate(z, 10, 100)

  S4
    in  z : IBH
    out updown : IBH
    ----
    z = updown
    rate(z, 1, 1)

The relation between media formats and the real world is described in P1,..., P4.

  P1
    in  w : PAL
    out screen : ℝ
    ----
    screen = pr′′(w)

  P2
    in  w : MPG
    out screen′ : ℝ
    ----
    screen′ = pr′′(w)

  P3
    in  w : DBH
    out moving : ℝ
    ----
    moving = pr′′(w)

  P4
    in  button : ℝ
    out w : IBH
    ----
    button = pr′′(w)

Matters of delay are modelled in the channels.

  C1
    in  z : PAL
    out w : PAL
    ----
    maxdelay(z, w, 2)

  C2
    in  z : MPG
    out w : MPG
    ----
    maxdelay(z, w, 3)

  C3
    in  z : DBH
    out w : DBH
    ----
    maxdelay(z, w, 4)

  C4
    in  z : IBH
    out w : IBH
    ----
    maxdelay(z, w, 4)

Finally the whole system is specified by:

SYSTEM = (S ∘ C ∘ P )\{z,w}
        ∥(S1∘ C1∘ P1)\{z,w}
           2   2   2
        ∥(S3 ∘ C3 ∘ P3)\{z,w}
        ∥(P4 ∘ C4 ∘ S4)\{z,w}
which has a syntactic interface of (I, O), where
I = {cable,the sign,h′appy-puppy,button}
O = {screen,screen,moving,updown}

4 Presentation resources

The next step is to formalize the presentation resources, i.e. the components and the network connections. We assume that the components cause delays and that the connections give rise to bandwidth restrictions. The structure of the resulting model is shown in Figure 5. Clearly we will need renaming later, for example [the_sign∕url1] (for modem) and [cable∕channel] (for STB).



Figure 5: Structure of the resource model

Somehow the components must be configured to perform the required routing of streams. Although the current mapping-example does not require it, in general a component must be able to combine two incoming streams and put them on a single output port, or conversely, split what comes in to produce separate outgoing streams. In other words, some components must contain a switch.

A component containing a switch must have an extra channel of a special type, a control channel, accepting so-called commands from a set ℂ. So from now on,

  S = {PAL, MPG, DBH, IBH, ℝ, ℂ}

To get some experience in modelling switches, we try a simplified component, the Simple Switch SS:

(x, y, z : DBH)

Any selection from x, y and z can be combined and offered on u, provided the rates fit. To adapt to the rate of u, which is assumed to be fixed, dummy data must be stuffed.

We assume the existence of disjoint copies of the set DBH, which we denote DBH1, DBH2, etc. We also assume conversion functions d1 : DBH → DBH1, d2 : DBH → DBH2, etc. and the corresponding inverses, e.g. d1⁻¹ : DBH1 → DBH. They give rise to lifted versions, e.g. d1′ : DBH* → DBH1*. These assumptions allow us to specify streams being merged, e.g. x and y merged into u:

  u ○c DBH1 = d1′′(x) ∧ u ○c DBH2 = d2′′(y)
In other words, the mechanism of disjoint copies is used to model tagging of sub-streams conveniently in an abstract way. (In Broy’s theory, S○c x  means the stream obtained from x by deleting all its messages that are not elements of the set S.)
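The disjoint-copies trick can be mimicked concretely: tagging a message with the name of its sub-stream plays the role of d1′′, d2′′, and filtering on the tag plays the role of Broy’s S ○c u. The sketch below is ours; the function names are illustrative.

```python
# Tagged merge of two streams onto one channel, and recovery of a
# sub-stream by filtering on its tag (illustrative sketch).

def tag(stream, label):
    """Lifted conversion into a disjoint copy: pair every message
    with the label of its sub-stream."""
    return [[(label, m) for m in interval] for interval in stream]

def merge(u1, u2):
    """Interval-wise merge of two tagged streams onto one channel u."""
    return [i1 + i2 for i1, i2 in zip(u1, u2)]

def select(u, label):
    """Like S (c) u: keep one sub-stream's messages, dropping the tag."""
    return [[m for (l, m) in interval if l == label] for interval in u]

x = [["x0"], ["x1"]]
y = [["y0"], ["y1"]]
u = merge(tag(x, "DBH1"), tag(y, "DBH2"))
assert select(u, "DBH1") == x and select(u, "DBH2") == y
```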

We introduce abbreviations:

  maxrate(x, n) == ∀ t : ℕ\{0} ∙ Σ_{i=1}^{#(x.t)} #bits(x.t.i) < n

  maxrate(x, y, n) == ∀ t : ℕ\{0} ∙
      Σ_{i=1}^{#(x.t)} #bits(x.t.i) + Σ_{i=1}^{#(y.t)} #bits(y.t.i) < n
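The maxrate bounds can also be checked on finite prefixes, again in our illustrative list-of-intervals model with frames as bit strings:

```python
# Prefix checks of the two maxrate predicates: the bits offered per
# 1-second interval must stay strictly below the bound n.

def maxrate1(x, n):
    return all(sum(len(f) for f in interval) < n for interval in x)

def maxrate2(x, y, n):
    return all(sum(len(f) for f in ix) + sum(len(f) for f in iy) < n
               for ix, iy in zip(x, y))

x = [["0" * 40_000] * 2]           # one interval carrying 80K bits
y = [["0" * 10_000]]               # one interval carrying 10K bits
assert maxrate1(x, 100_000)        # 80K < 100K
assert maxrate2(x, y, 100_000)     # 80K + 10K < 100K
assert not maxrate1(x, 80_000)     # 80K < 80K fails
```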

Although a real network connection carries bits, we use a more abstract model to reflect the general-purpose nature of connections:

  B = ⋃_{i=0,1,2,...} DBHi
    ∪ ⋃_{i=0,1,2,...} PALi
    ∪ etc.    (all data types used)

where DBH0 = DBH (so this can be used for the simple case where tagging is not really needed).

  SS
    in  x, y, z : DBH
    in  s : ℂ
    out u : B
    ----
    s.1.1 = xy2u ⇒
        maxrate(x, y, 100)
        ∧ delay(d1′′(x), u ○c DBH1, ΔSS)
        ∧ delay(d2′′(y), u ○c DBH2, ΔSS)
    s.1.1 = yx2u ⇒
        maxrate(x, y, 100)
        ∧ delay(d2′′(x), u ○c DBH2, ΔSS)
        ∧ delay(d1′′(y), u ○c DBH1, ΔSS)
    s.1.1 = x2u ⇒
        maxrate(x, 100)
        ∧ delay(d1′′(x), u ○c DBH1, ΔSS)
    etc. (y2u, z2u, xz2u, ...)

On the basis of this example, we consider the real components, assuming that the real delay occurs in the components, as shown in the following table. This means that a constraint such as delay(x, u, ΔSS) has to be rewritten as delay(d1′′(x), u ○c DBH1, ΔSS). We assume that the delay is fixed, independent of the load and of the precise routing; more sophisticated schemes can be devised if necessary. Note that ○c ′′ is the lifted version of ○c. The rate constraints are modelled as if they belong to the input ports.




  component      delay           value
  ------------   -------------   -----
  STB            ΔSTB            1
  modem          Δmodem          1
  hi_res         Δhi_res         1
  lo_res         Δlo_res         1
  level2_robot   Δlevel2_robot   0
  RC             ΔRC             0




  STB
    in  channel : PAL
    in  control : ℂ
    in  x, y : B
    out exec : IBH
    out u, v : B
    ----
    control.1.1 = channel2v ⇒
        maxrate(channel, 100M)
        ∧ delay(channel, v ○c PAL, ΔSTB)
    control.1.1 = x2exec ⇒
        maxrate(x, 100K)
        ∧ delay(x ○c DBH, exec, ΔSTB)
    control.1.1 = channel2v x2exec ⇒
        maxrate(channel, 100M)
        ∧ maxrate(x, 100K)
        ∧ delay(channel, v ○c PAL, ΔSTB)
        ∧ delay(x ○c DBH, exec, ΔSTB)
    etc.


  hi_res&PIP
    in  x : B
    in  control : ℂ
    out u : B
    out screen, PIP : ℝ
    ----
    control.1.1 = x2screen ⇒
        maxrate(x, 100M)
        ∧ delay(pr′′(x ○c PAL), screen, Δhi_res&PIP)
    control.1.1 = x2screen-PIP ⇒
        maxrate(x, 100M)
        ∧ delay(pr′′(x ○c PAL), screen, Δhi_res&PIP)
        ∧ delay(pr′′(x ○c DBH), PIP, Δhi_res&PIP)
    etc.


  RC
    in  x : B
    in  control : ℂ
    in  button : ℝ
    out u : B
    ----
    control.1.1 = button2u ⇒
        maxrate(u, 100)
        ∧ delay(button, pr′′(u ○c IBH), ΔRC)


  modem
    in  url1, url2, url3, x, y, z : B
    in  control : ℂ
    out u, v, w : B
    ----
    control.1.1 = url12v ⇒
        maxrate(url1, 56K)
        ∧ delay(url1, v, Δmodem)
    control.1.1 = z2u ⇒
        maxrate(z, 1M)
        ∧ delay(z, u, Δmodem)
    control.1.1 = url22w ⇒
        maxrate(url2, 56K)
        ∧ delay(url2, w, Δmodem)
    etc.


  level2_robot
    in  x : B
    in  feeling : ℝ
    in  control : ℂ
    out moving : ℝ
    out u : B
    ----
    control.1.1 = x2moving ⇒
        maxrate(x, 10K)
        ∧ delay(pr′′(x ○c DBH), moving, Δlevel2_robot)
    control.1.1 = feeling2u ⇒
        maxrate(u, 10K)
        ∧ delay(feeling, pr′′(u ○c IBH), Δlevel2_robot)
    control.1.1 = x2moving-feeling2u ⇒
        maxrate(x, 10K)
        ∧ maxrate(u, 10K)
        ∧ delay(pr′′(x ○c DBH), moving, Δlevel2_robot)
        ∧ delay(feeling, pr′′(u ○c IBH), Δlevel2_robot)


  lo_res
    in  x, y : B
    in  control : ℂ
    out u, v : B
    out screen : ℝ
    ----
    control.1.1 = x2screen ⇒
        maxrate(x, 1M)
        ∧ delay(pr′′(x ○c MPG), screen, Δlo_res)
    control.1.1 = y2u ⇒
        maxrate(y, 100)
        ∧ delay(y, u, Δlo_res)
    control.1.1 = x2screen y2u ⇒
        maxrate(x, 1M)
        ∧ maxrate(y, 100)
        ∧ delay(pr′′(x ○c MPG), screen, Δlo_res)
        ∧ delay(y, u, Δlo_res)
    etc.

5 Mapping

Next we propose a mapping. Abstractly, a mapping is a set of paths through a given network, one path for each of the four chains specified in SYSTEM. At a concrete level, the paths are established by feeding appropriate switching commands into STB, modem and lo_res. In general there is freedom in choosing these paths, for example when alternative routings through the network exist. Even the choice of which screen or interactive input is to be used is not a priori fixed. The four paths are shown in Figure 6. In an informal way it is easy to check that the delay and bandwidth constraints are not violated and that the mapping is feasible.


  Media needs           ΣΔ   Required ΣΔ
  -------------------   --   -----------
  path1  cable           2   ≤ 2
  path2  the_sign        1   ≤ 3
  path3  happy_puppy     1   ≤ 4
  path4  updown          3   ≤ 4

Figure 6: Four paths and “wire names”
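The informal delay check of Figure 6 can be mechanized. In this illustrative sketch, the per-component delays come from the table in Section 4, the maxdelay bounds from the channels C1–C4, and the lists of components crossed by each path are our reading of Figure 6.

```python
# Feasibility of the four mapping paths: sum the component delays
# along each path and compare with the maxdelay bound of its chain.

DELAY = {"STB": 1, "modem": 1, "hi_res": 1, "lo_res": 1,
         "level2_robot": 0, "RC": 0}

PATHS = {  # per media need: (components crossed, maxdelay bound)
    "cable":       (["STB", "hi_res"], 2),
    "the_sign":    (["modem", "lo_res"], 3),
    "happy_puppy": (["modem", "level2_robot"], 4),
    "updown":      (["RC", "lo_res", "modem", "STB"], 4),
}

def feasible(components, bound):
    return sum(DELAY[c] for c in components) <= bound

assert all(feasible(cs, b) for cs, b in PATHS.values())
```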

To formalize the connections we use renaming, adapting port names to “wire names”. The wire names are also shown in Figure 6. Note that w4a, w4b and w4c together form a path from RC via lo_res and modem to STB. The other wire names are not used. Unused ports must be hidden and ports that go to the external world must be renamed.

  STB′ = STB[cable∕channel, updown∕exec, w4c∕x, w1∕v]
     ≪ control : channel2v x2exec
     \{control, x, y}

  hi_res&PIP′ = hi_res&PIP[w1∕x]
     ≪ control : x2screen
     \{control, u, PIP}

  RC′ = RC[w4a∕u]
     ≪ control : button2u
     \{control, x}

  lo_res′ = lo_res[w4a∕y, w4b∕u, w2∕x, screen′∕screen]
     ≪ control : x2screen y2u
     \{control, v}

  modem′ = modem[happy_puppy∕url1, the_sign∕url2, w4c∕u, w3∕v, w4b∕z, w2∕w]
     ≪ control : url12v url22w z2u
     \{control, url3}

  level2_robot′ = level2_robot[w3∕x]
     ≪ control : x2moving
     \{control, u, feeling}

Finally the whole system implementation is described by

  SYSTEM′ = (STB′ ⊗ hi_res&PIP′ ⊗ RC′ ⊗ modem′ ⊗ level2_robot′)

which has the syntactic interface (I′, O′), where

  I′ = {cable, the_sign, happy_puppy, button}
  O′ = {screen, screen′, moving, updown}

6 Correctness

The next question is: how do we express the formal correctness of the implementation?

Broy proposes three notions of refinement: property refinement, glass box refinement, and interaction refinement. Glass box refinement is a classical concept typically used in the design phase to decompose a system with a specified black box behavior into a distributed system architecture, which seems appropriate in our case. The general form of a glass box refinement is

  F1 ⊕ F2 ⊕ ⋯ ⊕ Fn ⊆ F

(In Broy’s theory (see [2]) there is also another form for state machines, which we do not need here.) The relation ⊆ on component behaviors is defined by the rule that F̂ ⊆ F stands for the proposition

  ∀ x : Ī ∙ F̂.x ⊆ F.x

where

  F : Ī → P(Ō),  F̂ : Ī → P(Ō).
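The refinement rule is simple enough to check mechanically on toy behaviors. In this illustrative sketch (our names, a toy universe of integers), a behavior is a function from inputs to sets of outputs, and refinement is pointwise set inclusion:

```python
# F_hat refines F when F_hat.x is a subset of F.x for every input x.

def refines(f_hat, f, inputs):
    return all(f_hat(x) <= f(x) for x in inputs)   # <= is set inclusion

F     = lambda x: {x, x + 1}       # a loose specification
F_hat = lambda x: {x + 1}          # a deterministic implementation
assert refines(F_hat, F, range(10))
assert not refines(F, F_hat, range(10))
```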

Also recall the definition of ⊕ for F1 ⊕ F2:

  (F1 ⊕ F2).x = {y ↾ O : y ↾ I = x ↾ I
      ∧ y ↾ O1 = F1(x ↾ I1) ∧ y ↾ O2 = F2(x ↾ I2)}

Let RATE be given by

  RATE
    ----
    rate(x.cable, 100, 100K)
    rate(x.happy_puppy, 10, 100)
    rate(x.the_sign, 2, 5K)
    rate(x.button, 1, 1)

Note that this is essentially what S1, ..., S4 say about their z ports.

Consider an arbitrary x satisfying RATE with x ∈ Ī′, where I′ is {cable, ...}. Such an x is a channel valuation x : I′ → (M*)∞ such that

  ∀ i : I′ ∙ x.i ∈ (type(i)*)∞

where M = ⋃_{s∈S} s = PAL ∪ MPG ∪ DBH ∪ IBH ∪ ...

Since I′ has only four elements, we can write this out by assuming

  x : cable ↦ xu          (xu ∈ (PAL*)∞)
  x : happy_puppy ↦ xd    (xd ∈ (DBH*)∞)
  x : the_sign ↦ xc       (xc ∈ (MPG*)∞)
  x : button ↦ xb         (xb ∈ (ℝ*)∞)

Now this x satisfies RATE, that is, xu has 100 frames per second, each frame being 100K bits, etc. (And similarly, xd: 10, 100 respectively, xc: 2, 5K respectively and xb: 1, 1 respectively.)

Let y ∈ SYSTEM′.x with y ∈ Ō′, where O′ is {screen, screen′, moving, updown}, which we write out by assuming

  y : updown ↦ yi     (yi ∈ (IBH*)∞)
  y : screen ↦ ys     (ys ∈ (ℝ*)∞)
  y : screen′ ↦ ys′   (ys′ ∈ (ℝ*)∞)
  y : moving ↦ ym     (ym ∈ (ℝ*)∞)

The correctness requirement is

  RATE ∧ SYSTEM′ ⊆ SYSTEM

because SYSTEM′ only specifies maximal rates, whereas SYSTEM has already fixed the rates.

The obvious proof strategy is to take an arbitrary x satisfying RATE and consider a y ∈ SYSTEM′.x, i.e.

  y ∈ ((STB′ ⊕ hi_res&PIP′ ⊕ ⋯)\{w1, w2, ...}).x

and then check that y ∈ SYSTEM.x, which essentially boils down to checking maxdelay constraints and the essential transformations of the form pr′′ for each of the specification paths, e.g. when going from cable to screen, or when going from the_sign to screen′.

As an example we check that ys satisfies all constraints of SYSTEM.x. Since SYSTEM falls apart into 4 unconnected parts, it is enough to check that ys = y.screen satisfies the constraints of (S1 ∘ C1 ∘ P1)\{z, w}:


  ∃ z : (PAL*)∞ ∙ ∃ w : (PAL*)∞ ∙
      z = x.cable ∧ rate(z, 100, 100K)
      ∧ maxdelay(z, w, 2) ∧ y.screen = pr′′(w)

which is equivalent to

  rate(x.cable, 100, 100K)
      ∧ maxdelay(pr′′(x.cable), y.screen, 2)

(since pr′′ works frame-wise).

What we know about ys comes from RATE ∧ SYSTEM′, which for RATE means rate(x.cable, 100, 100K). For SYSTEM′, its meaning is more complicated. First, note that

  SYSTEM′ : Ī → P(Ō)

and also that SYSTEM′ = (STB′ ⊕ hi_res&PIP′ ⊕ ⋯), where

  STB′ = STB[cable∕channel, w1∕v, ...]
     ≪ control : channel2v x2exec
     \{control, x, y}

  hi_res&PIP′ = hi_res&PIP[w1∕x]
     ≪ control : x2screen
     \{control, u, PIP}

Note that for ys it suffices to focus on STB′ and hi_res&PIP′ and to forget about all channels other than w1, where ⊕ can be replaced by ∘, i.e.

  SYSTEM′ = hi_res&PIP′ ∘ STB′

We summarize the assertions from SYSTEM′ by performing the renamings and keeping only the relevant clauses in view of the chosen control command. STB′ says:

  maxrate(x.cable, 100M)
      ∧ delay(x.cable, w1 ○c PAL, ΔSTB)    (ΔSTB = 1)

hi_res&PIP′ says:

  maxrate(w1, 100M)
      ∧ delay(pr′′(w1 ○c PAL), y.screen, Δhi_res&PIP)    (Δhi_res&PIP = 1)

From rate(x.cable, 100, 100K) we conclude that for all t

  #(x.cable.t) = 100

and that for each i, the i-th frame has

  #bits(x.cable.t.i) = 100K

therefore

  Σ_{i=1}^{100} #bits(x.cable.t.i) = 10M

so the maxrate requirement of STB′ saying

  Σ_{i=1}^{#(cable.t)} #bits(cable.t.i) < 100M

is not contradicted.
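The bit-rate arithmetic is easily machine-checked (with K = 10³ and M = 10⁶):

```python
# 100 frames per second of 100K bits each is 10M bits per second,
# safely below the 100M maxrate bound of STB'.
frames_per_sec = 100
bits_per_frame = 100_000
total = frames_per_sec * bits_per_frame
assert total == 10_000_000        # 10M
assert total < 100_000_000        # maxrate(x.cable, 100M) not violated
```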

Next we check the delays. We combine the two one-step delay assertions, first by noting that pr′′ works frame-wise:

  delay(pr′′(x.cable), pr′′(w1 ○c PAL), 1)
  delay(pr′′(w1 ○c PAL), y.screen, 1)

and secondly by adding the delays:

  delay(pr′′(x.cable), y.screen, 2)

which satisfies the clause from SYSTEM that

  maxdelay(pr′′(x.cable), y.screen, 2)

as required. The other SYSTEM constraints can be checked in a similar manner.

7 Concluding remarks

The case study shows how Broy’s framework can deal with special interfaces for control and events at an abstract level and with real-world interfaces. It can also deal very well with bandwidth requirements and network delays. The case study does not include a sophisticated performance model, but it is plausible that such models can be made in the same modelling style (for example when the delay is a function of the bit rate).

The case study shows one specification and one configuration of presentation resources. Abstractly, the mapping between the two is a set of paths through the network. Concretely, a mapping is a set of control commands, to be given to those components that have switching capabilities. Maximal rate and delay requirements can be checked formally, although the calculations are not surprising. Media types are described by sets such as PAL and MPG, which implies that type compatibilities are handled formally too.

The model created focuses on the stream-based aspects; in this type of distributed systems these aspects are most important at the architectural level. At a protocol level, we expect that reactive behavior is most important, and complementary state-machine based models could be used  [12].

The following research questions remain as options for future research:

1. How to model dynamic aspects such as changing configurations and servers that build presentations according to the “factory” paradigm;
2. How to scale up the approach when more and more technicalities have to be modelled (while keeping the math away from media developers);
3. How to specify the general class of “mapping problems” (instead of a single instance, as we did here).

Acknowledgement

The authors would like to thank Emile Aarts and Maddy Janse from Philips Research Eindhoven for their support on the work and Yuechen Qian for his comments on the early draft of this paper.

References

[1]   E. Aarts and S. Marzano. The New Everyday View on Ambient Intelligence. Uitgeverij 010 Publishers, 2003.

[2]   M. Broy. A logical basis for component-based systems engineering. In M. Broy and R. Steinbrüggen, editors, Calculational System Design. IOS Press, 1999.

[3]   F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal. Pattern-Oriented Software Architecture, A System of Patterns. John Wiley & Sons, Inc., Chichester, UK, 1996.

[4]   W. Eixelsberger and H. Gall. Describing software architectures by system structure and properties. In COMPSAC ’98 - 22nd International Computer Software and Applications Conference, pages 106–111, Vienna, Austria, 1998. IEEE Computer Society.

[5]   L. M. G. Feijs and Y. Qian. Component algebra. Science of Computer Programming, 42(2-3):173–228, 2002.

[6]   J. Hu and L. M. G. Feijs. An adaptive architecture for presenting interactive media onto distributed interfaces. In The 21st IASTED International Conference on Applied Informatics (AI 2003), pages 899–904, Innsbruck, Austria, 2003. ACTA Press.

[7]   J. Hu and L. M. G. Feijs. An agent-based architecture for distributed interfaces and timed media in a storytelling application. In The 2nd International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1012–1013, Melbourne, Australia, 2003.

[8]   IBC. The world of content creation, management and delivery, 2003. Available from: http://www.ibc.org.

[9]   ICE-CREAM. The ice-cream project homepage, 2003. Available from: http://www.extra.research.philips.com /euprojects /icecream/.

[10]   C. Kray, A. Krüger, and C. Endres. Some issues on presentations in intelligent environments. In E. Aarts, R. Collier, E. van Loenen, and B. de Ruyter, editors, First European Symposium on Ambient Intelligence (EUSAI), pages 15–26, Veldhoven, The Netherlands, 2003. Springer.

[11]   NexTV. The nextv project homepage, 2001. Available from: http://www.extra.research.philips.com /euprojects /nextv.

[12]   A. Ulrich and H. König. Specification-based testing of concurrent systems. In A. Togashi, T. Mizuno, N. Shiratori, and T. Higashino, editors, Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols (FORTE 1997), pages 7–22. Chapman & Hall, 1997.
