CAUSE/EFFECT

This article was published in CAUSE/EFFECT journal, Volume 22 Number 2 1999. The copyright is shared by EDUCAUSE and the author. See http://www.educause.edu/copyright for additional copyright information.

The Building of a Virtual Lecture Hall: Netcasting at the University of South Florida
by Ted Netterfield

The University of South Florida has successfully delivered its Symposia Series through the Internet to the desktops of individuals unable to attend the lectures, using streaming audio and video and IP multicast technologies. What began as a proof-of-concept prototype to test technological capabilities has become a valued service at the university. This article addresses not only the technical issues but also the accompanying politics and, inevitably, coping with Murphy's Law.

The University of South Florida (USF) in Tampa and its three regional campuses are home to 35,000 students. Approximately 85 percent of these students commute to and from USF. Consequently, it is difficult for many students to attend lectures hosted by the university's Symposia Series or distinguished lecturers invited by departments. These events are an integral part of the academic experience. Over the past two years, to allow student participation in these events without extra trips to campus, the academic computing organization has made them available via its Netcast Project.

Using an assortment of currently available technologies, both video and audio are streamed over the Internet via IP multicast in real time and captured for playback at the student's convenience. The lectures are then available to cable modem subscribers, through dialup, in student labs on the main and regional campuses, and so forth. This article addresses issues of technology selection for different audiences, media integration, equipment selection, mobility considerations, production, and the politics that accompany such projects.

From the Beginning

The Netcast Project at USF was christened in the spring of 1997 with a lecture by Alfredo Duran titled "A Call to the End of the U.S. Embargo of Cuba." In retrospect, we couldn't have picked a better lecture to launch this activity. The lecture was embroiled in controversy: the Cuban community was picketing, there were security checks at each of the entrances, and the local broadcast media were on hand in anticipation of a conflict. In spite of these distractions, the netcast of this event came off with only a few problems.

We learned quite a bit from this first event. While we were focusing primarily on the technical issues, it became immediately obvious that there was much more to producing a successful netcast. Beyond the data connection, encoding operation, and server configuration, we also had to address audio and video gear selection and operation, desktop support, getting the word out, updating the Web site, mobilizing the gear, coordinating with the speaker and other production units, a myriad of production issues, and accommodating Murphy.

This project was launched as a proof of concept, in response to two concerns. First, as the network unit on campus responsible for the university's backbone network, regional campus links, remote access, and several of the local area networks (LANs), we were becoming concerned about the effect that multimedia-streaming technology would have on our data communications infrastructure. Second, we believed that streaming technology was inevitable, yet we were not seeing any activity on campus in this area. Considering that desktop computers are prevalent throughout the university, this appeared to be an excellent opportunity for USF to deliver video and audio content.

Streaming technology was not entirely new to some of us involved with the project. We had been running IP multicast in support of the MBone since 1993 and were using the MBone tools (VIC, VAT, and WB) to communicate among several of our system administrators. We did have to shift gears, though: the MBone tools ran exclusively on Unix-based computers at the time, and since the desktops we would be targeting were predominantly Microsoft Windows-based, we would have to find other tools to work with. Also, no one relished the idea of carrying a Sun Microsystems workstation around to the events. So the project began with a zero-dollar budget, which meant begging, borrowing, and appropriating equipment and working with a collection of people from varying backgrounds and departments who were willing to work on it in their spare time. The staff who participated were very enthusiastic and committed to the project.

Where�s the Content?

For this project to be successful, we needed to identify a source of content that would interest the USF community at large and that this method of delivery would clearly complement and enhance. We did not have to look far to find it.

USF hosts a Symposia Series of invited lecturers, who range from celebrities to speakers sponsored by individual departments. The symposia thus range in content from general interest to discipline specific, with a certain amount of controversy associated with some of the subjects. These lectures are scheduled throughout the semester, at almost any time of day or evening, and at a variety of locations. With a majority of students commuting to and from campus and with conflicts between employment and class schedules, it is difficult for students to attend many of these lectures. In addition, since these lectures are hosted primarily at our main campus, students at our regional campuses rarely have access to these events.

The USF Symposia Series was an ideal match for what we were hoping to accomplish. We presented our idea to the coordinator of the Symposia Series and were greeted with great enthusiasm and support. Thus, our journey began.

Audio

Delivering good quality audio turns out to be more complex than one would think. Different scenarios require different equipment. For example, a speaker who wanders or delivers a highly animated lecture requires a wireless lapel microphone, while a speaker who prefers to stand at the podium needs a microphone with a narrow pickup range so as not to pick up residual noise from the audience.

At our first netcast event we had arranged to borrow a wireless lapel microphone, which we plugged into a sound card to deliver the lecture. This worked well until the lecture was opened to questions from the audience at the end. No one had anticipated that there would be other audio sources as part of the lecture, so those listening via the netcast were unable to hear the questions. This introduced the need for multiple microphones distributed about the room and an audio mixer to aggregate the additional audio sources. Beyond the requirement for multiple audio sources, we found we also needed to attend to microphone placement, room acoustics, and problems with interactive audio.1

After several events we noticed that many speakers tended to speak very close to the microphone, producing an annoying loud low-frequency pop from the smacking of their lips. This was very distracting to listeners, and to minimize it we had to acquire a gate limiter to filter it out. We now have a collection of condenser mikes, a wireless lapel mike, an audio mixer, an assortment of mike stands, the gate limiter, and headphones.

We often found ourselves cutting corners and purchasing the least expensive items to keep costs down. While this worked fine for many items, we found that in the case of headsets it is very important to pay the premium. At large events the noise is generally overwhelming, and in order to hear what you are transmitting, it is essential to have the best set of headphones you can find.

At many events we are simply handed an audio feed from another group that is responsible for audio production. At the point of handoff, microphone selection and placement are no longer our problem; however, we lose a lot of control and have to deal with a host of other problems: changing power levels, unwanted artifacts such as hum, and loss of signal altogether. At the first event where we had to interface with another audio provider, we were asked what type of cable we needed to plug into our system. When we held up a 3.5 mm mini-jack connector that would plug into a sound card, they quickly informed us that we needed either an XLR connector or a 1/4" phone connector. Since we generally never know what we will have to interface with when we arrive at an event, we have stocked up on every kind of cable and connector imaginable to be prepared for any situation.

Providing good clean audio has become easier with each event. We have gained a comfort level with our equipment and now understand most of the limitations we face. We have also become acquainted with most of the audio providers at the events, who now know exactly what we need when we arrive.

Video

Initially, we thought video would be a much easier medium to deal with. Our first experience involved a camcorder, a simple home-variety tripod, and a frame-grabber card for our encoder. The event went fairly well for a first effort, but it became immediately obvious that we would have to invest in professional-grade equipment if we were going to continue: during that first event one of the legs of the tripod collapsed several times, and the camcorder's zoom was not powerful enough to get a tight shot of the speaker from our location.

As we produced additional events, a general pattern emerged. At large events there is good lighting on the speaker, but we are generally given space for our equipment a good distance from the stage. At smaller events lighting is generally no more than fluorescent overhead fixtures, but we are close to the speaker. The poor lighting at smaller events often causes serious problems when other light sources are introduced; an overhead projector, for example, leaves us with a silhouette of the speaker. Additionally, at smaller events the speakers tend to supplement the lecture with overheads, computer-generated output, or chalk/white boards, all of which we need to accommodate.

Thus, considering the range of content and situations we would have to deal with, we began looking for a digital camera with at least a 16x zoom lens, a tripod with a floating head, a scan converter to capture computer output, a document camera, portable lighting, and a video mixer through which to tie all the inputs together for the capture card.2 While the video collection side of the operation has not provided us with any major technical difficulties, it has become the most cumbersome part of the operation.

The Big Squeeze

Raw uncompressed video at a 320x240 picture size and 30 frames per second (fps) generates roughly a 53-megabit-per-second (mbps) stream. With streams of this size, it would not take much to saturate even today's 100-mbps LANs and gigabit backbone networks, and delivery to remote users would be impossible. Clearly some sort of compression is required to bring the stream down to a manageable size.
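
For a rough sense of where that figure comes from, here is a back-of-the-envelope check (the article does not state a color depth, so 24-bit color and binary megabits are assumed here):

```python
# Back-of-the-envelope check of the raw-video figure quoted above.
# Assumptions (ours, not the article's): 24 bits per pixel, 1 mbit = 2**20 bits.
width, height = 320, 240      # picture size in pixels
bits_per_pixel = 24           # assumed RGB color depth
fps = 30                      # frames per second

raw_bps = width * height * bits_per_pixel * fps   # 55,296,000 bits per second
print(raw_bps / 2**20)        # ~52.7, i.e., roughly the 53 mbps cited
```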

The compression/decompression (CODEC) scheme has been perhaps the thorniest selection issue we have faced. Many different schemes are available: H.261, MPEG1, MPEG2, MPEG4, and several proprietary ones. Each has been designed for a particular scenario. H.261 was designed for low bit rates--64 kilobits per second (kbps) to 2 mbps--delivered through a public switched digital network such as ISDN, and it is used by most of the MBone tools. MPEG1 and MPEG2, on the other hand, were designed to deliver higher quality video at higher bit rates. MPEG1 delivers good to very good picture quality at frame rates close to 30 fps in a range from 600 kbps to 1.5 mbps. MPEG2 delivers very good to excellent picture quality at 30 fps in a range from 4 to 9 mbps, but it has one drawback: at this time it requires decoding hardware on the client machine. MPEG4 is a recent addition to the MPEG family and is aimed at lower bit rates; we have seen MPEG4 deliver 28.8-kbps low-motion streams quite successfully.

Finding the CODEC that best meets one's needs is difficult at best; the process boils down to a subjective judgment call. Typically an encoder is configured to generate a bit stream at a given rate, number of frames per second, and picture size, with possibly some direction as to whether to emphasize picture crispness or motion. These parameters, other than picture size, are only targets for the encoder to strive for; the encoder's actual performance is dictated by the characteristics of the video source (that is, motion content, number of colors, and complexity of the scene). The end result to be evaluated is the number of frames per second delivered, motion fluidity, crispness of images, and the presence of compression artifacts (that is, pixelation or blocking).
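
As an illustration of the knobs involved, a hypothetical configuration might look like the sketch below (the parameter names are ours, for illustration only, and do not correspond to any particular vendor's product):

```python
# Hypothetical encoder settings illustrating the parameters discussed above.
# Only the picture size is a hard constraint; the rest are targets the encoder
# strives for, subject to the motion, color, and complexity of the source.
encoder_settings = {
    "picture_size": (320, 240),   # fixed output dimensions
    "target_bit_rate_kbps": 300,  # desired stream size
    "target_fps": 15,             # frame rate to aim for
    "bias": "crispness",          # or "motion": trade sharpness for fluidity
}
```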

Generally, CODECs are bundled into a client/server package by a vendor. This introduces a series of other issues that are critical in the selection process. For example, does the server support IP multicast, RTSP (Real-Time Streaming Protocol), SMIL (Synchronized Multimedia Integration Language), and multiple client platforms? What is the look and feel of the client? Is there support for existing video file formats (AVI, MPEG, JPEG, ASF, and so forth)? Is the client available as a browser plug-in and/or a stand-alone application? Is there third-party support for tools such as catalogers and/or indexers?

We have looked at three vendor packages: RealNetworks' RealVideo, Microsoft's NetShow, and Cisco's IP/TV. Our decision boiled down primarily to price: both RealVideo and IP/TV are priced on a per-seat model, while NetShow is basically provided free of charge for now. Since we have had no recurring funds for the project, we have been using NetShow, which is based on MPEG4. This technology has been in flux, and our evaluations have shown that every couple of months either NetShow, RealVideo, or some other package is doing the better job. Luckily the NetShow package has worked very well for us, and the picture quality is good across a large range of bit rates.

The Network

Our netcast events live or die by the network. A thorough understanding of the network topology and link speeds is essential: in order to determine whether a stream will reach its intended audience, you must know what limitations and bottlenecks lie between you and your constituents. One key advantage we have had in this project is that it originated in USF's networking group, whose members have an intimate understanding of the network. In addition, we have had no problem obtaining a data connection for any given lecture.

At the beginning of this project, USF's data network consisted primarily of shared 10-mbps Ethernet local area networks with a few Token Ring segments. These LANs were tied together via an FDDI backbone, which supported primarily the IP and IPX protocols. The regional campuses were all connected to the backbone network via 600-kbps wide-area links, and approximately 300 28.8-kbps dialup ports were available for student and staff access.

Since then we have begun to upgrade our desktop connections to switched 10/100 mbps and have installed a gigabit switched backbone network. Our residence halls have been rewired, and switched 10/100-mbps service is available to each bed. Three of our regional campuses' wide-area links have been upgraded to OC-3 (155 mbps) ATM service. In January 1999 the university connected to the vBNS (very-high-performance Backbone Network Service) through its membership in Internet2, providing high-speed connectivity to approximately 140 other universities. This allows us to provide high-bit-rate feeds (for example, Board of Regents meetings) to several of our sister institutions in the state of Florida. The dialup hardware has been upgraded to support the 56-kbps V.90 standard, and the ports now number more than 600. Also, in the spring of 1998 a 100-mbps connection was established with Time Warner's Road Runner service to provide direct connectivity for USF students and staff via cable modem; initial reports from users of this service show download rates in the 3-mbps range. We are also working with GTE on its rollout of xDSL service in the Tampa Bay area and expect to interface directly with them to provide a higher speed service to our students and staff.

There are essentially two methods in the IP networking realm for delivering a real-time multimedia stream: unicast and multicast. With unicast, when a potential listener requests delivery of a lecture, the server providing the stream must initiate a separate stream for that listener; the bandwidth the server must transmit for a given lecture is therefore the bit rate of a single stream times the number of listeners. With multicast, when a potential listener requests delivery of a lecture, a join request is sent to the router providing connectivity for that local area network, and the router duplicates the lecture's stream from an upstream source; the server therefore originates the stream at the delivered bit rate only once, regardless of the number of listeners. Delivery via unicast tends not to scale and, depending on the geographical location of the listeners, can easily saturate data links. By delivering the streams via multicast, we are able to predict the worst-case network load that will be introduced onto our local area networks and wide-area links. IP multicast is an important feature to enable within the network infrastructure when delivering real-time multimedia streams.
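
The scaling difference is easy to quantify. The sketch below works through the arithmetic with hypothetical numbers:

```python
def server_egress_kbps(stream_kbps: float, listeners: int, multicast: bool) -> float:
    """Bandwidth the origin server must transmit for one lecture."""
    # Unicast: the server sends a separate copy of the stream to each listener.
    # Multicast: the server sends one copy; routers duplicate it downstream.
    return stream_kbps if multicast else stream_kbps * listeners

# A 300-kbps lecture with 200 listeners:
print(server_egress_kbps(300, 200, multicast=False))  # 60000.0 kbps (~60 mbps)
print(server_egress_kbps(300, 200, multicast=True))   # 300.0 kbps
```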

Mobility

As the project matured, more and more equipment was required to deliver a netcast lecture. Lectures were held in facilities ranging from our 2,200-seat Special Events Center to 30-seat classrooms, and we had to accommodate setup times ranging from several hours to 10 minutes. For every event we had several loose pieces of equipment, which we typically transported in the back of a golf cart. This led to forgotten equipment and fear of equipment damage in transit. In addition, the cabling between devices had to be reconstructed at each event, which led to a certain amount of troubleshooting that had to be anticipated in the setup time. We reached a point where the thought of adding another piece of equipment to the package was out of the question.

To solve this problem we went shopping for cases that could accommodate 19" rack-mountable equipment, were fairly lightweight, and had a front, back, and top that were easily removed. The photo shows the result of our search. To ensure that the equipment was secure during transport, we made a few modifications to its packaging; for example, we moved the computer that performs the encoding into a 19" rack-mountable case and traded the computer's 15" monitor for a lightweight flat LCD display. By logically grouping equipment by function within the cases, we have been able to make the majority of the connections between pieces of equipment more or less permanent. This has significantly reduced our setup and teardown times at events and improved reliability. The end result is an organized, professional-looking package, which has helped to solidify the project's viability.

Photo 1
USF's mobile netcast unit includes a video/audio encoder, an audio mixer, audio gate limiters, a video switch, a scan converter, a VCR, a video camera, a wireless microphone receiver, a document camera, power conditioners, a 10/100 Ethernet hub, and microphones and stands.

The Desktop

Desktop support has not been a major issue throughout this project. The issues we have come up against involve making sure the user's desktop meets the minimum requirements. For example, after we had delivered our very first lecture, an audio-only netcast, we received a call from a user who claimed that he had followed our instructions for installing the client software but was unable to hear a thing. Upon investigation we found that his desktop was a 486-based PC, the standard desktop machine at the time, with no sound card. This was typical, since sound cards at that point were an option the university didn't deem necessary. We had been operating on the false assumption that most machines were equipped with sound devices when, in reality, the majority of those who might be interested in receiving the lecture were unable to do so. Fortunately, shortly thereafter new desktop purchases came with sound devices as standard, and most desktops began to be upgraded. As a result of this situation we published the minimum requirements for a desktop station to receive a netcast lecture.

When we moved to video, we forgot the lesson we had learned with audio. The evening we delivered our first video-based netcast lecture, we were feeling very confident about the quality of the stream. Partway into the event, the coordinator of the lecture series informed us that he had been to his office to tune into the lecture and that the video quality was terrible! When we went to his office to see what was happening, we found a 486 PC that was having a hard time keeping up with the data traffic, let alone decoding and displaying the video content. We had been bitten again. With the introduction of video, the minimum machine required depends on the CODEC being used and the bit rate of the netcast: the bigger the stream, the more powerful the machine must be. Since we have limited staff and time to dedicate to this project, it has been difficult for us to provide one-on-one desktop support. Fortunately this hasn't been a problem so far.

Planning for the Audience

One of the first things we do when planning an event is try to identify the target audience. This tells us their probable geographic location, which determines what constraints the network will impose on delivery of the event; from this information we can determine the maximum deliverable bit rate. We have developed certain rules of thumb for determining maximum bit rates for events.

Typically, if an event is in the evening, we have found that the greatest interest comes from those at home accessing it via dialup, so the target bit rate for those events is 28.8 kbps. This works well for podium speakers, who are not uncommon at these events. But when considerable motion is involved, as in a dance performance, we forgo making the event available via dialup and move the bit rate up to 300 kbps or higher to accommodate our cable modem subscribers and achieve good motion fluidity.

For events held during the day, such as a "Legislative Update" from our president, we know the interest will come from our main and regional campuses, for which we have few constraints on the bit rates we can provide. Again, the amount of motion in an event is the primary concern when delivering at low bit rates. We have arrived at events expecting to deliver a low-bit-rate stream only to discover that the event would open with a dance performance or that the speaker had a tendency to sway back and forth during the presentation, causing choppy video output.
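
Taken together, these rules of thumb amount to a simple decision procedure, sketched below (the thresholds are the ones quoted above; the function itself is our illustration, not a tool described in the text):

```python
def target_bit_rate_kbps(evening_event: bool, high_motion: bool) -> float:
    """Rule-of-thumb target bit rate for a live netcast, per the text above."""
    if high_motion:
        return 300.0   # dance and similar: forgo dialup, serve cable modems
    if evening_event:
        return 28.8    # home listeners on 28.8-kbps dialup
    return 300.0       # daytime: campus audience, few network constraints
```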

No matter what bit rate we deliver the live event at, we tape the event and then re-encode it at different bit rates for playback.

Getting the Word Out

While we have been busy working on the back end of the netcast events, an equally important aspect must be addressed: getting the word out about upcoming events. A great deal of effort has to be spent promoting a project such as this; there is very little value in what we are doing if there is no audience.

We have developed, as one would expect, a Web site (http://www.netcast.usf.edu/) where one can get information on upcoming events and view past events. On a per-event basis, there are electronic mailings to USF listservs and postings to the usf.x newsgroups. The Symposia Series organization has promoted the events in its literature and through announcements at the beginning of the large events. And we have managed to get coverage in the campus student newspaper.

In the longer term, we would like to capitalize on the Session Description Protocol (SDP) with tools similar to SDR, which was developed for the MBone. This could provide a central facility through which users could discover upcoming events and gain access to current events independent of the originating organization.
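
For example, an SDP announcement for a netcast lecture might look roughly like the following (the addresses, ports, and times shown are placeholders, not USF's actual values):

```
v=0
o=netcast 2890844526 2890842807 IN IP4 192.0.2.10
s=USF Symposia Series lecture
i=Live netcast; details at the project Web site
u=http://www.netcast.usf.edu/
c=IN IP4 224.2.17.12/127
t=3034423619 3034426219
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 31
```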

Speaker Preparation

For the most part, speakers stand at the podium and deliver their message with no material other than their spoken word. But when a speaker arrives with additional material in the form of transparencies, computer-generated output, or notes to be scribbled on a chalkboard, many unexpected consequences emerge. We rarely get to communicate directly with the speaker prior to an event; typically this communication is handled by the event organizers. This makes it very difficult for us to determine the speaker's intentions and prepare for possible alternatives.

Many speakers simply assume that an overhead projector will be available and hastily make notes on transparencies just prior to the lecture, or even just wing it. This forces us to be ready for almost any scenario right up to the event. When a speaker arrives with a prepared presentation, most of the time the material is not well suited for delivery via this medium--for example, poor color schemes in PowerPoint presentations, notes prepared in HTML for presentation via a Web browser, or illegible handwriting on whiteboards. This issue needs serious attention if speakers are to make effective use of these media.

The organizers are generally very sensitive to any suggestion that we modify an audio-visual setup they have arranged. For example, when we suggest substituting a document camera for an overhead projector, they become concerned that the speaker may be uncomfortable with the device, even though it is very similar in appearance and functionality to an overhead projector. The organizers do not want to introduce anything that might interfere with a speaker's ability to deliver his or her message.

Murphy�s Law Rules

Murphy is with us at every event. If you have ever had a paper with a deadline just moments away and hit the print button only to have your word processor notify you that it has executed an illegal instruction, then you know what it is like just before nearly every event we have netcast. We have had the encoding machine crash 15 minutes before show time after being up and online for two hours. We have had our network connection begin to drop out erratically 10 minutes before show time. We have had audio power levels rise and fall during show time, after we had worked out several quirks with the audio and even though the providers had promised that the levels were fixed and would not vary. In spite of all the times Murphy was among us, we have failed to deliver an event only once.

Politics and Legalities

For any given event or activity in our Netcast Project, issues have arisen that we assumed had been worked out by someone else, only to find that they had not. They are generally legal issues involving permission to distribute, content ownership, copyrights, and so forth. For example, we were presented with a collection of VHS tapes and asked to digitize them for playback to students as course material. We inquired as to whether there were any copyright issues that needed to be resolved and were assured there were not. While we were in the process of digitizing the tapes, we heard from their rightful owner, who informed us that there was indeed a copyright issue.

For each event we netcast, we must obtain the speaker's consent to distribute the lecture. The Symposia coordinator took the initiative to include language in the speaker's contract granting us permission to distribute the lecture to the USF community via netcast. However, a few celebrity speakers have struck the clause from the contract. For non-Symposia events, we work with the event coordinators to obtain the necessary permission.

Typically we would begin playing our favorite music about an hour before each event to let people know that we were there and ready. After a couple of events we were asked whether we were violating any copyright laws or licensing agreements by broadcasting this music. When we investigated, we found that the Office of Student Affairs pays an annual fee to the musicians' union, which gives us the right to distribute music to the USF community. Since the majority of the agreements we enter into stipulate that the content is for USF community consumption only, we have been forced to restrict access to USF IP address holders.
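
A minimal sketch of that kind of source-address restriction is shown below (the address block is a documentation placeholder, not USF's actual allocation):

```python
from ipaddress import ip_address, ip_network

# Placeholder block; an institution would list its real address allocations.
ALLOWED_NETWORKS = [ip_network("192.0.2.0/24")]

def may_receive_stream(client_ip: str) -> bool:
    """Admit only clients whose source address falls within a permitted block."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```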

Where We Are Today

From our experiences throughout the Netcast Project we have built a configuration that allows us to capture and distribute the vast majority of the content we have been offered. Figure 1 provides a logical layout of our current equipment configuration. A few pieces of equipment could still be added to this package, such as portable lights or a video amplifier, but for the most part it is complete, and we have been able to borrow the equipment we need but do not own.

Figure 1: Logical Equipment Layout

Lessons Learned

Of all the problems we have encountered throughout this project, those stemming from video and audio production have worn on us the most. And when we have observed our television-broadcast unit during a mobile operation, it is easy to see them experiencing the same technical difficulties and stressful moments that we have. This is why, whenever we have the opportunity to receive production services for an event, we take it. In contrast, the process of encoding and streaming has been relatively well behaved throughout the project. Starting this endeavor with audio-only presentation allowed us to grow with the technology.

We have received many inquiries from other institutions regarding the cost of putting together a unit similar to ours. Table 1 lists our current configuration along with approximate costs. Of all the items on the list, the scan converter is the one that most needs to be upgraded: in our effort to keep costs down, we forwent the zoom and pan features that we have since discovered are critical in many situations. After sharing this information with other institutions, we learned that a few of them have spent two to three times what we have on turnkey packages in preparation for a large event, and in spite of the prices they paid, their events were plagued with technical difficulties. The old adage that "there is no substitute for experience" is particularly relevant to netcasting.

Table 1: Equipment List and Cost
Rackmount Encoding Computer (Dual PII 300) with Sound $6350
Video Capture Board $100
Video & Web Server (Dual PII 300) with Server Ethernet $7000
Flat Screen LCD Monitor $1500
Rackmount Keyboard & Shelf $160
Rackmount Audio Cases (2) with Drawers & Shelves $1400
Audio Mixer with 4 Outputs $730
Headphones $100
Rackmount Power Conditioner (2) $140
10/100 Hub with Fiber Input $1000
Miscellaneous Cables & Connectors $200
NT Server License (2) $200
MS NetShow Server Free
MS NetShow Clients Free
Wireless Microphone System $410
Dual-Channel Gate/Limiter/Compressor (2) $460
Professional Speaker Microphones (2) $310
Choral/Shotgun Microphones (2) $460
Microphone Stands/Booms/Tabletops $170
Audio Snake 150' 8 Channel with Splitter $400
Digital Video Camera with Extender & Tapes $4650
Document Camera $2600
Tripod with Video Head $350
Digital Video Mixer $1600
Scan Converter $1300
VCR $150
Total $31,740

Conclusions

We have watched streaming technology evolve and mature throughout this project and now can reliably deliver audio and video streams over a wide range of network media types. What began as a proof-of-concept project has resulted in a viable mechanism for the delivery of USF's Symposia Series, guest lectures, and other university events to the desktops of students, faculty, and staff.

Sidebar

Glossary of Terms

ATM – Asynchronous Transfer Mode. International standard for cell relay of multiple service types (such as voice, video, or data). Typically transported over DS-3 or SONET.

CODEC – Coder-decoder (compression/decompression). A scheme or device that transforms an audio or video signal into a compressed digital bit stream and decodes that stream back for playback.

FDDI – Fiber Distributed Data Interface. A local area network standard specifying a 100-mbps token-passing network using fiber-optic cable. FDDI uses a dual-ring architecture to provide redundancy.

Gigabit – One billion (10^9) bits; a gigabit network carries roughly one billion bits per second.

IP – Internet Protocol. Network layer protocol in the TCP/IP stack offering a connectionless internetwork service.

IP Multicast – Routing technique that allows IP traffic to be propagated from one source to a number of destinations or from many sources to many destinations. Rather than sending one packet to each destination, one packet is sent and then duplicated by the network for a specific subset of addresses.

IPX – Internetwork Packet Exchange. Novell NetWare network layer protocol used for transferring data from servers to workstations.

ISDN – Integrated Services Digital Network. A set of communications standards allowing one multipurpose communication network to carry voice, data, and video.

MBone – Multicast backbone. A virtual IP multicast network composed of multicast local area networks and the point-to-point tunnels that interconnect them.

OC-3 – Optical Carrier level 3. OC is the physical foundation for SONET optical signal transmissions; OC signal levels put STS frames onto a fiber-optic line at a variety of speeds. The base rate is 51.84 mbps (OC-1); each level thereafter operates at a multiple of that rate (thus OC-3 runs at 155.52 mbps).

Pixelation – Compression artifact in which individual pixels or blocks of pixels become visible in an image.

SDP – Session Description Protocol. Intended for describing multimedia sessions for the purposes of session announcement, session invitation, and other forms of multimedia session initiation.

SDR – A session directory tool designed to allow the advertisement and joining of multicast conferences.

Streaming Technology – The real-time transmission of multimedia information.

Token Ring – A token-passing local area network developed and supported by IBM. Token Ring runs at 4 or 16 mbps over a ring topology.

Unicast – Communication between a single sender and a single receiver over a network (in contrast to multicast).

xDSL – Digital Subscriber Line. Public network technology that delivers high bandwidth over conventional copper wiring at limited distances. There are four types of DSL: ADSL, HDSL, SDSL, and VDSL.

XLR – Three-pin connector for audio devices.

Endnotes

1 See "Audio for Distance Learning," Shure Applications Group, for a discussion of such audio concerns; it is online at http://www.shure.com/booklets/distancelearn.html.

2 See "Digital Video for the Next Millennium," Video Development Initiative (ViDe), which provides a high-level description of several digital video formats; it is online at http://sunsite.utk.edu/video/.

Ted Netterfield ([email protected]) is associate director, Academic Computing and Information Technologies, at the University of South Florida.
