WP4 GUI configuration handbook


www.avalonlearning.eu

 

 


 

This document is licensed under a "Creative Commons Attribution-Noncommercial-Share Alike 3.0 Austria" licence ("Creative Commons Namensnennung-Keine kommerzielle Nutzung-Weitergabe unter gleichen Bedingungen 3.0 Österreich"). For further details see: http://creativecommons.org/licenses/by-nc-sa/3.0/at/

 

Authors: Ruth Wagner, Patrick Lang, talkademy.org

 

 

This project has been funded with support from the European Commission. This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

 

Second Life is a registered trademark of LindenLab Corp., San Francisco. Other mentioned trademarks are the respective property of their owners.

 

Introduction

 

This document describes all elements of the Graphical User Interface Configuration, including all operational components (which would formally be part of WP4-1-1) and the experience design (which would formally be part of WP2). We integrate these components into one document to provide a consistent view of creating a simplified and improved action learning experience.

 

Audience

 

The special focus of this document is how to implement the GUI configuration; thus the intended audience of this deliverable is IT system integrators.

 

Motivation

 

This handbook shall enable system integrators to replicate the GUI configuration used in the AVALON project.

 

The main motivation for the specific GUI configuration is to allow participants to be involved in special action learning events without the need to install additional software (nothing beyond a web browser with a Flash-based plug-in, which is already available on almost all internet-connected computers).

 

The intention is to enhance the reach of the AVALON project, allowing people to participate who:

  • do not have access to a computer with powerful graphic capabilities (as required for rendering a 3D environment);

  • do not have the required skill or privileges on their computer to install additional software which is connected to the internet (most companies would not allow this);

  • are not willing to invest a lot of time installing additional software for a new learning experience whose individual cost / benefit ratio they cannot anticipate.

 

We assume that any one of these three points may be an obstacle for many participants in the AVALON target groups. To overcome this obstacle we are planning to run demonstration sessions over the AVALON website, allowing everyone to participate with very little effort.

 

This easy experience shall work like a “teaser”, motivating more potential participants to get involved with further AVALON action learning events and to make the effort to install the full client.

 

Methodology

 

The methodology used in this deliverable is based on the “Rational Unified Process” and the IBM Global Services Method. Similar methodologies under different names are published by various software engineering vendors, with similar results. As the “Rational Unified Process” is widely used in the academic and commercial software engineering domain, we will use its naming and structuring without an additional reference.

 

Learning Scenario "UKnow"

 

The learning scenario used for the implementation of the demonstrator is one of several scenarios created within the experience design consulting in work-package 2 of the AVALON project. In this chapter, only the outline relevant to implementing the demonstrator is used.

 

The motivations for implementing the “UKnow” scenario are:

 

  • Movement: The participant is sitting on a candidate chair, so no intentional movements are required. This has the advantage that the participant does not have to learn any movement controls first. On the other hand, we can implement subconscious gestures to animate the avatar, supporting the current emotional state of the participant. These could be triggered automatically (the avatar is happy when gaining points) or explicitly by the moderator (reading the emotion coming over the voice channel and triggering gestures manually).

     

  • Knowledge Areas: UKnow is a meta-framework game which can be adapted to different games quite easily by changing the questions and answers. Because it can be extended very easily, question sets can also be generated from a larger pool specifically for the skill level and learning aims of the participant (see the sketch after this list).

 

  • Enhanceable: The demonstrator allows the implementation of different rule-sets (“modi”) that orchestrate the user experience. By changing the rules (modi), different corresponding mental skills are trained.

 

  • Lag: Participants can attend directly on the 3D stage or via a video feed transmitted to a web browser. As internet transmission delays can differ between users, it is important to offer “modi” where delay (lag) in the signal transmission from the 3D stage to the web browser does not matter.

 

  • Intuitive: Participants should be able to participate as quickly as possible, with a minimum effort to learn the rules of the game. One way to achieve this is to reuse elements from widely known TV-shows. The TV-like format allows us to create experiences that are familiar to the participant and can easily be communicated. Participants can easily and quickly join and develop accurate expectations.

 

  • Joker: The experience shall be as inclusive as possible, meaning that there are (potentially) no passive participants (viewers) as in a TV show, but only active participants serving as candidates, supporters (“jokers”), jury, etc. Participants can choose which role they want to take. At the same time, potential participants can be invited by other participants to get in touch and become more active in using the 3D environment.
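To illustrate the “Knowledge Areas” point above, the following minimal Python sketch shows how a quiz round could be generated from a larger question pool, filtered by the participant's skill level and learning aims. The question format and field names are purely hypothetical; the AVALON demonstrator does not prescribe any particular data format.

import random

# Hypothetical question pool; the fields and values are illustrative only.
QUESTION_POOL = [
    {"topic": "idioms", "level": 1,
     "q": "What does 'to break the ice' mean?",
     "answers": ["start a conversation", "destroy something",
                 "go ice skating", "cool a drink"],
     "correct": 0},
    {"topic": "grammar", "level": 2,
     "q": "Choose the correct form: 'She ___ to school every day.'",
     "answers": ["go", "goes", "going", "gone"],
     "correct": 1},
    # ... a real pool would contain many entries per topic and level
]

def generate_round(pool, level, topics, n_questions):
    # Draw random questions matching the participant's level and learning aims.
    candidates = [q for q in pool
                  if q["level"] <= level and q["topic"] in topics]
    return random.sample(candidates, min(n_questions, len(candidates)))

for item in generate_round(QUESTION_POOL, level=2,
                           topics={"idioms", "grammar"}, n_questions=2):
    print(item["q"])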

 

General description and Rule-set of “UKnow”

 

Genre

 

UKnow is a typical cognitive guessing game in a closed gaming environment (TV-studio setting). The game is useful for learning scenarios, to assess and complement previously acquired skills in a game-based way. More specifically, the demonstrator is implemented to support the assessment and complementation of language learning.

 

Gameplay

 

UKnow is a game-based learning event, which fits nicely into the action learning concept. The game works for 2 to 8 participants, who compete against each other with their language skills. For training purposes the game can be played alone. The language quiz provides questions on geography, history, culture, idioms and grammar. The participants can choose between single- and multi-player game play. The aim is to achieve the maximum number of assigned points. For each “modus” (see below) there is a high score for the all-time best performance.

 

Design

 

Every candidate has a desk with a buzzer. On the desk the individual point count is visualized, so that both candidates and the moderator can see it. Visible to all is a screen which displays the quiz questions.

 

The visual game design incorporates some special requirements:

  • it needs to work at low resolution, thus requiring optimal positioning of the participants, stage elements and camera positions;

  • a set of pre-defined camera positions shall enable a dynamic visual experience which allows each participant to identify with his or her avatar on the 3D stage. The cameras will be operated manually by the moderator.

 

Fig. 1 Draft Stage

 

Fig. 1 collects some example pictures to visualize the stage to be built in the 3D environment. The two drafts on the left show the candidate's seat. The third picture shows the stage as implemented in Second Life; the distance to the display is approximately 17 m. The question mark is a 3D object placed between the screen and the candidates' seats, which are arranged in a half-circle.

 

 

Rule-set configuration "Modi"

 

In the experience design document, part of work-package 2 of the AVALON project, 11 modi have been developed. The modus “Who wants to be a millionaire” has been implemented in a virtual TV studio.

 

This modus is inspired by the popular Central European TV show “Millionenshow”. The rules are not identical, but recognizable to anyone who knows the original show. The basic idea is that the participant can earn points by answering questions correctly.

 

The player can ask for help by using each of three jokers once:

 

  • Audience joker: The audience is asked for the right answer. The audience votes, and the candidate is presented with the voting result. The candidate can consider the result, but is not required to follow the audience's answer. Anyone viewing the transmission shall be able to add a vote.

     

  • Phone joker: The candidate can call a person and ask for advice. The moderator managing the stream can include the call in the event. This way, even people without any computer at all can be included in the event.

 

  • 50:50 joker: If the candidate chooses this joker, two of the wrong answers are removed, thus improving the probability of choosing the right answer.

 

 

Graphical User Interface (GUI) Configuration –

Technical Implementation

 

 

Component Overview

 

To produce a demonstrator allowing us to evaluate the GUI configuration proposed in the AVALON project, we have to consider the different parts of the user experience.

 

The technical implementation of the demonstrator includes the following components:

  a. the virtual TV studio in the 3D environment (camera control in Second Life)

  b. the stage implementation in that vTV studio

  c. the voice gateway, 3D client and capturing software installed on a controller laptop (allowing the moderator to control the event)

  d. the server infrastructure for streaming the video feeds

  e. the plug-in for the website to receive the video stream from the event

 

The virtual TV studio (a), allowing camera control in the 3D environment, is implemented with a script widely used in Second Life called “BijoCam”. It allows certain camera positions to be predefined upfront; the camera positions can then be selected by chat commands. To allow the moderator to control the cameras more conveniently, we scripted the cameras to be triggered by hot-keys, so changing the camera requires only one key-stroke. This allows the use of cinematographic effects like close-ups, total views, inserts, etc.
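As a sketch of the hot-key idea, the following Python fragment maps function keys to camera chat commands and types them into the focused Second Life chat bar using the pynput library. The command strings are hypothetical; the actual syntax depends on how the camera script is configured.

from pynput import keyboard

# Hypothetical mapping from hot-keys to camera chat commands;
# the real syntax depends on the camera script's configuration.
CAMERA_PRESETS = {
    keyboard.Key.f1: "/9 cam closeup",
    keyboard.Key.f2: "/9 cam total",
    keyboard.Key.f3: "/9 cam insert",
}

typer = keyboard.Controller()

def on_press(key):
    command = CAMERA_PRESETS.get(key)
    if command:
        # Type the chat command into the focused chat bar and send it.
        typer.type(command)
        typer.press(keyboard.Key.enter)
        typer.release(keyboard.Key.enter)

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()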

 

The stage (b) was designed in work-package 2 during the experience design consulting; for details please refer to the previous chapter or the full WP2 document. The actual stage design is based on the specific constraints existing in Second Life, optimized to be viewed with limited bandwidth and resolution based on the optical model of Second Life.

 

The existence of the stage was a pre-requisite for the implementation of the demonstrator, as we needed it for testing the scenario. One of the success factors is that this kind of environment allows an immersive participant experience without installing any special software. As the experience is based on live social interaction between humans, the context (the learning game) is a pre-requisite. For more details on the testing results of this demonstrator, please refer to the testing chapter.

 

The controller laptop (c) allows the moderator to control the experience for all participants. The moderator needs an extended GUI to execute this task live, alongside his other duty of moderating the learning event.

 

The server infrastructure (d) for streaming the video is required to connect the user interface's content, produced on the stage (b) under the moderator's control (c), to the user (e). The plug-in on the website is actually the smallest component in the whole GUI configuration, as it only receives the visual impressions.

 

As most of the user experience is controlled by the moderator, the user needs only very limited control over the event, which is exercised through a voice / audio integration. This special configuration allows the user to participate “hands free” and without the need to install additional software.

 

Live Content Creation Process

 

The AVALON project is based on the idea of live action learning events in an online environment, in this case Second Life. In this chapter we discuss the back-end process of how the live content is processed and fed into the user interface.

 

The source of the content, and thus the place the user's interaction needs to affect, is the virtual studio, located in a 3D environment. It produces audio and video information which needs to be mixed and processed for streaming before it is transmitted to a streaming server, which relays the content to the 2D-Internet participants who use a web browser as their window into the 3D GUI.

 


 

 

Fig. 2 illustrates the content creation and distribution process. In the 3D environment (a) the stage (b) is displayed on the controlling laptop of the moderator (c). There, an additional piece of software, a “virtual webcam”, captures the audio and video signals. Basically, it is the view on the moderator's display that is transmitted. More specifically, it is a part of the display: because the moderator also has the duty of manipulating the avatars of remote participants, one can usually expect two views into the 3D environment to be open on the controlling laptop.

 

The insertion of text (*) into the video stream is optional, but can help improve the readability of questions if the resolution used is very low, preventing text rendered inside the 3D world from being read. For the demonstrator we did not implement this feature, but we outline it here because bandwidth limitations may be addressed by reducing the resolution of the video feed.

 

Input from the remote participant mainly comes in via Voice over IP (in our case Skype) and is mixed into the output, which is streamed back to all participants. The audio handling is technologically the most challenging part, but also the most crucial, because most of the interaction is voice-triggered. It would also be possible to include manual user commands (e.g. via keyboard), but as the interaction experience is most significantly shaped by the auditory interaction, the implemented demonstrator is based on voice commands. These commands are implicitly administrated by the moderator, basically as they would be in a TV show. Later this could be automated, as voice-command processing components are widely available; but again, much of the atmosphere derives from the “human touch” in the interaction between the participants, so with the demonstrator at hand we want to explore the critical success factors in the human interaction design first, before “hard-wiring” it by integrating additional technology.
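To indicate how such an automation might look, here is a minimal sketch using the freely available Python speech_recognition library; the command vocabulary is hypothetical, and the demonstrator itself deliberately keeps the human moderator in this role.

import speech_recognition as sr

# Hypothetical command vocabulary; in the demonstrator the moderator
# interprets these implicitly over the voice channel.
COMMANDS = {"buzzer", "joker", "answer a", "answer b", "answer c", "answer d"}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio).lower()
    print("Command:" if text in COMMANDS else "Ignored:", text)
except sr.UnknownValueError:
    print("Could not understand the audio.")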

 

The mixed audio / visual signal is transmitted to an external streaming server (d), which handles the multiplication of the feed to a larger number of users. The challenging part here is to achieve maximum quality at minimal latency in the transmission to the plug-in on the website (e).

 

Summary of the process:

1. capturing of the display from Second Life and creation of a video feed

2. starting of a Voice over IP (VoIP) conference with the 2D-Internet participants

3. redirecting the incoming VoIP audio signal to Second Life

4. redirecting the Second Life audio signal to the video feed

5. transmission of the audio and video signals (AV-signal) to a streaming server

6. broadcasting of the AV-signal from the streaming server to the user
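The demonstrator performs these steps with GUI tools (WebcamMax or Wirecast, Virtual Audio Cable, Skype). Purely as an illustration of steps 1, 4 and 5, the following Python sketch shows how a comparable capture-and-stream pipeline could be scripted on Windows with the ffmpeg command-line tool; the RTMP endpoint and the audio device name are assumptions.

import subprocess

RTMP_URL = "rtmp://streaming.example.org/live/avalon"  # hypothetical endpoint

# Capture the desktop (gdigrab) and the mixed Virtual Audio Cable signal
# (dshow), encode to H.264/AAC and push the AV feed to a streaming server.
cmd = [
    "ffmpeg",
    "-f", "gdigrab", "-framerate", "15", "-video_size", "800x600", "-i", "desktop",
    "-f", "dshow", "-i", "audio=Line 1 (Virtual Audio Cable)",  # assumed device name
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "500k",
    "-c:a", "aac", "-b:a", "96k",
    "-f", "flv", RTMP_URL,
]
subprocess.run(cmd, check=True)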

 

 

Demonstrator Implementation

 

In this section the implementation is defined, including the starting configuration as well as the three implemented demonstrators. The three demonstrators differ in the components tested. This includes the configuration of the Second Life video-feed capturing, the mixing of the video and audio signals, and the transmission to the streaming server, including the streaming server itself.

 

Perfecting the streaming will optimize the user experience. “Optimize”, because the configuration is always a trade-off between bandwidth (and thus audio/video quality) and fps/resolution (and thus readability of the GUI).

 

The three demonstrators are:

 

  1. The first, simplest demonstrator is designed to optimize the audio configuration, which serves as a feedback channel of the GUI configuration for the user. The aim of this demonstrator is the virtual routing of the audio signal, enabling everyone to hear without echo or too much delay.

     
  2. The second demonstrator focuses on optimizing the scenario on the stage by tuning the parameters for video and audio quality (bitrate, resolution) under consideration of the expected bandwidth.

     
  3. The third demonstrator shall test different website plug-ins connected to different streaming-server technologies. This will allow us to optimize the overall user experience of the GUI configuration.

 

For streaming we are using the freely available Ustream platform, which saves cost and, being widely used, can draw attention from a wider audience to the action learning events. As a reference we are also using an internal streaming server, to have better control over the components used. Ustream is a shared platform, so its performance is unpredictable; an internal streaming server is under our own control, but needs special promotion to be found by potential action learning participants.

 

 

 

Fig. 3 illustrates the model of the implementation to display the involved components.

 

This model shows the planned implementation as a draft, to provide an overview of the different components.

 

  • The SL-server (Second Life Server) hosts the virtual world of Second Life. By connecting to the SL-Server through a Second Life client, it is possible to gain control over an avatar, a character within the virtual world.

 

  • The virtual studio is an independent computer, where the signal for the stream is prepared and sent to the streaming server.

 

  • An instance of the SL-client runs on the virtual studio and provides the basis for transmitting the Second Life events as a live stream.

 

  • To permit the viewer to play an active role, a Voice-over-IP (VoIP) conference with the viewer needs to be established; at that moment the viewer changes from a 2D-Internet observer into a 2D-Internet participant.

 

  • The VoIP signal is then rerouted within the virtual studio via virtual audio cables (Virtual Audio Cable) to Second Life. To successfully transmit the stream, the visual signal of the SL-client is mixed with the audio signal of Second Life and sent to the streaming server.

 

  • The streaming server then provides the viewers, who can receive the stream via their browser, with the live audio and video signal (AV-signal).

 

Second Life-Server

 

The MUVE server (or sim server) is, in the case of the prototype implementation, maintenance-free, because the vTV studio was implemented within Second Life itself and those servers are maintained by LindenLab's own developers. The corresponding disadvantage is that one cannot anticipate when LindenLab will shut down its servers for maintenance. When using the MUVE platform OpenSim, that disadvantage would not exist, because the owners maintain the server themselves.

 

Virtual Studio

 

The second part, referred to as the virtual studio, has the task of combining the video and audio streams into one AV-stream and sending the completed AV-stream to a streaming server. The virtual studio is implemented on a separate notebook because the location of the stream could change; therefore the studio has to be mobile.

 

Streaming

 

The third part, streaming, includes the server-side implementation (streaming server). A streaming platform and a server have been used which provide the option to change settings on an administrative level. To allow conferencing with the virtual participant, respectively the 2D-Internet participant, a VoIP program was used.

 

Client

 

The fourth and last part is the client. The client receives the stream via a browser and connects through the VoIP program to the moderators of the virtual studio and thereby to all 3D-Internet participants in Second Life.

 

General Audio Configuration

 

The audio configuration forms a computer-internal, digital routing of the audio signals via Virtual Audio Cable. To give a better picture, a circuit diagram was created which shows all the important audio inputs and outputs with their respective links and repeaters.

 


 

Requirements:

  1. The signal of the microphone should be audible in Second Life and be part of the stream.
  2. The input signal from Skype, i.e. everything the virtual participant says, should also be audible in Second Life.
  3. The Second Life signal should be audible on the headphones and in the transmission.

 

If both sides of a VoIP conference are recorded (input and output) or forwarded, fast feedback loops are generated. The program Virtual Audio Cable ought to prevent those loops. The setup should make it possible to transmit both sides of the VoIP conversation without either one receiving its own feedback. Both sides are supposed to be transmitted into the virtual world, so that in-world participants are also able to hear what the 2D-Internet participant and the moderator have to say.
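To make this constraint concrete, the following Python sketch models the required routing as a small directed graph and checks it for cycles; a cycle in the audio routing is exactly a feedback loop. The node names are hypothetical and merely mirror the three requirements above.

# Hypothetical audio-routing graph: each node lists the nodes it feeds into.
ROUTING = {
    "mic_moderator": ["second_life", "stream"],  # requirement 1
    "skype_from_2d": ["second_life", "stream"],  # requirement 2
    "second_life": ["headphones", "stream"],     # requirement 3, not back into Skype
}

def has_feedback(graph):
    # Depth-first search for a cycle; a cycle means an audio feedback loop.
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in graph)

print("feedback loop!" if has_feedback(ROUTING) else "routing is feedback-free")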

 

 

Configuration Demonstrator #1

 

For the first prototype the program WebcamMax was used to capture the Second Life video sequence and the AV-signal. The forwarding of the stream is handled by the web interface of the streaming platform Ustream, which sends the AV-stream to the Ustream streaming server.

 

That is the easiest way of handling the problem, but offers only a small margin for adjusting the individual parameters. This prototype was built to find the ideal parameters for forwarding the audio signals through the program Virtual Audio Cable.

 


 

As mentioned before, the original Second Life servers are used. At the virtual studio, the video sources can be mixed together with the application WebcamMax. To reroute the incoming and outgoing audio signals, the application Virtual Audio Cable is used.

 

The Ustream Broadcast Console, available through a browser, is used for streaming the signal. The streaming platform ustream.tv uses the Flash Media Server (FMS) by Adobe in the background. For the transmission of the VoIP signal, Skype is used, and the audio configuration can be modified with Virtual Audio Cable, as explained in chapter 4.2.

 

Video configuration of the Hosted Clients

 

The visual picture is chosen as a source in WebcamMax, which also regulates the quality. The quality and size can be changed to the following parameters: 176x144, 320x240, 352x288, 640x480. Moreover, the number of frames per second can be set. The following tests determine which resolution is necessary to achieve unrestricted readability of the display on the quiz stage.

 

 

Stream configuration

 

The streaming platform Ustream is free of charge. Everybody is able to create a user account and set up their own shows and channels. Via the integrated Ustream Broadcast Console it is possible to choose between transmitting video only, audio only, or both at the same time, to regulate the volume, to choose the video and audio sources, and to control the quality via a slider, although the exact values are not visible.

 

This poses a problem for the productive use of streaming, because you cannot rely on exact values. The video and audio quality is strongly dependent on the bandwidth. To make sure that the viewers can follow the stream at all times, the audio signal has a higher priority and should be streamed at a higher quality than the video signal.
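As a rough aid for this bandwidth trade-off, the following Python sketch estimates the H.264 bitrate required for the selectable resolutions, using a common rule of thumb of about 0.1 bits per pixel per frame; both the factor and the frame rate are assumptions, not measured values.

# Rule-of-thumb bitrate estimate: width * height * fps * bits_per_pixel.
BITS_PER_PIXEL = 0.1  # rough assumption for H.264 at moderate quality
FPS = 15              # assumed frame rate

for width, height in [(176, 144), (320, 240), (352, 288), (640, 480), (800, 600)]:
    kbps = width * height * FPS * BITS_PER_PIXEL / 1000
    print(f"{width}x{height}: ~{kbps:.0f} kbit/s")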

 

Ustream is based upon the Flash Media Server and uses the H.264 video codec and the HE-AAC audio codec. The streaming itself is transmitted over UDP and uses the RTMP protocol.

 

 

Configuration Demonstrator #2

For the second prototype, nothing has been changed server-side. The difference lies in an additional program, which replaces both the function of WebcamMax (capturing the Second Life video, adding text) and the streaming options of the Ustream Broadcast Console. The program Wirecast by Telestream offers a broad spectrum of options regarding the quality of video and audio.

 

Wirecast version 3.5.2 already implements streaming to the Ustream platform. The audio configuration via Virtual Audio Cable can be adjusted as shown here:

 


 

Audio- and Video source configuration

 

Video source

With the additional program Desktop Presenter (Telestream), it is possible to select a specific window; it can also be used as a virtual webcam program. Desktop Presenter must be executed and the Second Life window selected. To add the screen of the Desktop Presenter as a source in Wirecast, a new Desktop Presenter needs to be added to Wirecast via the IP address of the virtual studio.

 

Audio source

To be able to select the mixed audio source, a new audio shot needs to be added in Wirecast, and, according to the audio configuration model, the right virtual cable must be selected as the source.

 

Ustream Streaming Wirecast configuration

 

Under the menu item "Broadcast" the submenu "Broadcast configuration" can be found, which is responsible for the upcoming streaming. The broadcast configuration of Wirecast allows loading the preset "Ustream".

 

Wirecast requests a Ustream username and channel name, which have previously been created on the Ustream platform. From these, an RTMP URL is generated and a password is requested. To be able to stream from Wirecast to Ustream, it is also necessary to start the Ustream Broadcast Console. The configuration, however, remains within the program Wirecast.

 

The encoder preconfiguration in Wirecast offers the possibility to choose between different presets, ranging from Flash low bandwidth up to Flash HD bandwidth. There is also the possibility to select a user-defined configuration. As with WebcamMax in the first prototype, one can change the width and height of the video source. In addition, the frames per second (fps) and the maximum transfer bitrate can also be adjusted. Equivalent audio-encoding configurations are of course available.

 

The difference between the Ustream streaming configuration in Wirecast and WebcamMax or the Ustream Broadcast Console lies in the ability to enter values individually: the resolution can be chosen freely, and the quality of the audio and video configuration can be adjusted manually by entering the desired bitrate.

 

The actual bitrate depends on the upload rate offered by the internet connection of the virtual studio and on the bandwidth of the client who retrieves the stream.

 

Configuration Demonstrator #3

 


 

For the implementation of the third prototype, only the streaming server changes; the technology used for capturing and streaming stays the same.

To rule out delays caused by the streaming platform Ustream, a different streaming server is used.

 

The bandwidth configuration regarding audio and video quality (bitrate, resolution) stays the same. However, the Darwin Streaming Server, which is provided by the talkademy network for testing purposes, uses QuickTime. For capturing the Second Life window, Wirecast is used, just as in the implementation of the second prototype. Wirecast is also responsible for streaming over the QuickTime server.

 

Streaming configuration

 

As in the implementation of the second prototype, user-defined configurations can be entered for the streaming itself. To be able to stream over a QuickTime server at all, one has to select the MPEG-4 entry in the encoder options. In the new encoder options, the following adjustments have to be made:

 

  • The output format is QuickTime, the video encoder is set to H.264 and the audio codec to HE-AAC. The resolution is, as in any other streaming setup, dependent on the available bandwidth and quality (bitrate).

 

 

Test and Evaluation 

 

To determine whether the implemented prototype satisfies all technical as well as content-based requirements, tests have to be performed. From a technical point of view, it must be ascertained that the prototype can live up to the needs of possible interaction between 3D and 2D participants. At the beginning, the assumption was stated that delays can occur, which would slow down or even prevent communication between 2D and 3D participants.

 

From a content-based point of view, it must be ascertained that the scenario develops the way it was meant to, according to the concept by Barbara Kocher in Second Life ('on stage') for the operation in the vTV studio. In the context of this work, two tests have been conducted and further tests described, which are strongly recommended for the further development of the prototype.

 

Conducted Tests

 

As the assumption exists that delays occur during transmission, this was examined in a test. The test is meant to show whether delays occur and how big they are. Because a transmission depends on various parameters like bitrate, frames per second and resolution, testing with different values can show how those parameters and the delay are connected.

 

Another test was conducted to find out whether the assembly of the stage for the vTV-studio scenario was appropriate. Here it was important to determine whether questions shown on the display could be read by 2D contestants sitting on the virtual chairs. Readability is strongly connected to the streaming resolution. In the following, all tests and their results are explained.

 

Tests for evaluation of delay over the Ustream Platform and the QuickTime-Server

 

This test was conducted based on the assumption that delays occur during any transmission. The test results were supposed to answer the following questions:

 

  • Does delay occur while streaming?
  • Is the delay connected to adjustable streaming parameters?
  • Does the Streaming-Server affect the delay?
  • Is there a difference between the delay via Ustream and the delay via the QuickTime-Server?

 

Specification of the approach

 

Because the test refers to the basic delay of the transmission, the VoIP conference (an interaction between 3D and 2D participants) was disregarded. Only the audio signal of Second Life is transmitted, because the assumption was made that the required bitrate of the VoIP audio signal does not change when it is added to the Second Life audio signals.

 

Wirecast has been used for the transmission of the AV-signal to the Ustream and QuickTime servers. In Wirecast, parameters can be set to exact values, which provides a good basis for comparison. The first test, the Ustream test, was based on the second prototype; the second test, the QuickTime test, on the third prototype. The only difference in the configuration of both tests is that the VoIP conferencing was left out.

 

At first, the test series over the Ustream platform was performed, because it was the easiest to handle and required no experience with streaming. The Ustream test was performed with 16 different parameter values.

 

Because all test cycles produced similar results, the tests on the QuickTime server used extreme values which, if a connection between parameters and delay existed, were supposed to make it visible.

 

To visualize the delay, a real-time audiovisual signal was triggered in Second Life. The time from the release of the signal until the audiovisual signal could be seen and heard over the stream was measured.
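A simple way to take such a measurement, sketched below in Python, is a manual stopwatch: press Enter once when the signal is released in Second Life and once when it appears in the web stream; the difference is the observed end-to-end delay. This merely illustrates the manual procedure described above.

import time

input("Press Enter when the signal is released in Second Life...")
t_release = time.monotonic()

input("Press Enter when the signal appears in the web stream...")
t_arrival = time.monotonic()

print(f"Measured end-to-end delay: {t_arrival - t_release:.2f} s")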

 

Tests for determining the necessary resolution

 

Fig. 8: Distance Display / Avatars

 

Description of approach

Because readability correlates with distance, the tests were performed in an already existing UKnow quiz environment. On the stage, eight competitors' chairs are arranged in an arc, so that all chairs point towards the display where the questions are shown. The tests were performed at a distance of around 16 meters, the actual distance between the display and the chairs. Resolutions from 160x120 px to 800x600 px have been tested.

 

Test results - 160x120 px

 

Illustration 6.2 shows the results of the test cycles with a resolution of 160x120 px. In both the right and the left picture, at a distance of around 16 meters, one can see that the text is fuzzy or not readable at all. Even the text with the biggest font size in the left picture cannot be read.

 

The readability of the text on the display improved as the resolution increased. However, full readability without previous knowledge of the text was not achieved until a resolution of 800x600 px. The results of these tests are shown in illustration 6.3.

 


 

For the further development of the vTV studio, a resolution of at least 800x600 is recommended. If subsequent tests show that this resolution generates too much data and therefore requires too high a data rate, the font size of the questions and the distance between the display and the chairs have to be adjusted.

 

Additional Test Results

 

In order to find a way to transmit voice and video in acceptable quality, further tests have been made.

 

ManyCam (Win & Mac) & CamTwist (Mac)

ManyCam is a free application available for PC and Mac. This application offers the possibility to provide the desktop as a webcam. Regrettably, a forced video size is only available in the Windows version. The application does not allow providing specific application windows; only the full desktop or predefined parts can be chosen.

http://www.manycam.com/

 

 

CamTwist, a free Mac application, allows providing the desktop as a webcam stream. By using the option Desktop+, a specific window can be defined for streaming. Regrettably it allows neither resizing the image stream nor redefining the framerate.

http://allocinit.com/index.php?title=CamTwist

 

 

Audio-MIDI-Setup (Mac)

As the application Virtual Audio Cable is not available for Mac, an alternative solution had to be found. Mac OS X natively offers the option to create virtual audio devices in the Audio MIDI Setup:

 

 

By clicking on Audio => Aggregate Device Editor (“Geräte-Editor öffnen”) new virtual devices can be created and audio devices assigned. A detailed explanation (in English) can be found here:

http://support.apple.com/kb/HT1215

 

Skype

Skype is a standard VoIP solution on the web; even people with little IT understanding know how to use this application. Skype could be used as a dial-in application to allow communication with other participants within Second Life. Skype also offers a live-chat option that can be used to transmit live video. Tests have been made with ManyCam and CamTwist. CamTwist offers slightly better quality; both allow judder-free transmission of in-game video with minimal delay, but the video quality is too poor: it is hardly possible to read text projected on virtual walls in Second Life, as the following screenshot shows.

 

 

Newer versions of Skype offer the function to transmit the desktop directly (without the use of third-party programs) in good quality but at low fps (frames per second). On the one hand this causes judder in the video stream, but on the other hand it provides video quality that allows text projected on walls within Second Life to be read clearly.

 

 

 

This solution only works for one-to-one communication; transmitting the desktop is not yet possible in group conversations.

 

http://showmypc.com/hosting/online-broadcasting.html

 

zaplive.tv & livestream.com

As the delay using Ustream.com is much too high to allow practicable real-time streaming, the alternatives zaplive.tv and livestream.com have been tested. Both have a delay as high as that observed with Ustream.com. livestream.com even offers the possibility to reduce the framerate, but even with low fps and the lowest quality a delay of several seconds appears.

 

ScreenStream

http://www.nchsoftware.com/screen/index.html

This Windows application allows us to provide the desktop of a presenter online. The application includes a webserver which automatically provides the desktop within a webpage, without the need to download any plugin (which even works on an iPhone! See the second image below). This software can be combined with another product by the same developers, Broadwave (http://www.nch.com.au/streaming/index.html). This combination allows desktop and audio streaming at once. The video transmission works in respectable quality, as the following screenshot shows.

 

 

 

But there is one major problem: the application was programmed for presenters of e.g. PowerPoint presentations or simple desktop applications, not video games. The application allows a refresh of the image only every 300 ms (at most) or upon a mouse click on a window, but clicks within Second Life do not count. This results in an extremely low refresh rate, which makes the software unusable for our purposes.

 

BroadCam

http://www.nchsoftware.com/broadcam/index.html

BroadCam offers a webserver that allows us to stream a video directly on a website. By using ManyCam the desktop can be streamed as well. The application allows us to set the streaming method (ASV, Flash or JPEG stream) and the framerate, but not the video resolution. As a result, text projected on walls within Second Life is not readable (see the following images; image 1: JPEG stream, image 2: ASV stream; the Flash stream did not work at all).

 

 

 

RealVNC Server

http://www.realvnc.com/

The RealVNC software was made to provide remote control over a computer on a network. RealVNC also includes a webserver that gives an internet user the possibility to control a PC (and thus e.g. Second Life) via a Java plugin in a browser. Regrettably, audio transmission is not supported. The web plugin does not run on an iPhone or iPod touch, as the Java Runtime Environment is not natively installed. The refresh rate is about 1 frame per second and the color depth is reduced (as the following screenshot shows); however, this application allows direct control over an avatar in Second Life.

 

 

 

Google Talk

The Windows application Google Talk does not support direct transmission of the desktop; a virtual webcam driver (like ManyCam) has to be used. As there are no options to configure the quality of the video transmission, the quality of the stream is as bad as with the Skype solution.

 

iChat & other multi-messenger

iChat, the multi-messenger application built into Mac OS X, offers a menu option to start video chats, but this option is deactivated when using Gmail as a provider. It seems this option is only activated if the chat partner uses iChat as well. With other multi-messenger applications (Miranda, Pidgin) a video connection could not be established either.

 

BeamYourScreen.com

BeamYourScreen allows the desktop of a PC or Mac to be streamed directly on a website provided by BeamYourScreen. Even though the transmission quality is pretty good, the watching client is forced to see the full resolution (i.e. has to scroll if the transmitting client's resolution is higher than that of the watching client), and the lag is too high for appropriate streaming and communication.

 

 

 

showmypc.com

This service allows online broadcasting of the desktop. As there is no test version available (only professional versions starting at $14/month), this service could not be tested yet.
