For those readers who are new to the concept of AV-over-IP signal transmission, we’ve got some good news…and some not-so-good news.
The good news: You can stop obsessing over the numerous protocols associated with digital signal transport and display connections, like HDCP, HDMI, DP, DDC, and DVI. Also the versions of these acronyms, like 1.3, 1.4, 2.0, 2.2, etc. (They’re still important, but you don’t have to worry about them for now.)
The not-so-good news: You have a new set of acronyms to learn, like UDP, RTSP, and IGMP. (Hey, acronyms are an essential part of our industry, right?)
What’s different about these acronyms is that the vast majority of them define protocols. You know – those sets of rules that diplomats use when visiting other countries so that they know when and how to shake hands, how to address heads of state, which fork to use (and maybe not use a fork at all), and who pours the drinks for whom, and when. (The latter is very helpful to know in Japan.)
“AV over IP” means audiovisual signals over Internet Protocol – or, more specifically, AV signals traveling over a network that uses Internet Protocol. IP is usually accompanied by Transmission Control Protocol (TCP), and together the two protocols define everything from Internet addresses to the size of packets and the headers that accompany Internet messages (files, documents, photos, and emails).
Postal codes, like “zip” codes in the United States, follow a specific protocol. They help the post office zero in on a specific address. Telephone numbers also follow a protocol in the way they are structured with area codes and country codes. In both cases, postal and voice traffic get where they’re supposed to go because everyone in the world recognizes and follows these protocols.
You may ask: why do we need additional protocols for AV traffic when we send it over an IT network? Good question, and here’s an even better answer. Internet traffic is random by nature. The messages we send may be structured into multiple packets, but not all of those packets arrive in order, or at the same time.
Some packets may travel different routes through the network to get to their final destination, which means the order of your message is inevitably broken up. But you don’t care, because the different parts of your message eventually all arrive in one place and are put back in order, thanks to the headers and packet-order checking built into TCP. And you can read that email, open that photo, and play that MP3 file just as the sender intended.
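To picture what that reassembly looks like, here’s a toy sketch in plain Python (the sequence numbers and three-letter payloads are made up for illustration – real TCP numbers bytes, not packets): packets show up out of order, and the sequence numbers let the receiver rebuild the original message.

```python
# Toy illustration of what TCP does for you: packets arrive out of
# order, and sequence numbers let the receiver reassemble the message.

arrived = [
    (2, "wor"),   # (sequence number, payload) -- arrived out of order
    (0, "hel"),
    (3, "ld!"),
    (1, "lo "),
]

# Sorting by sequence number restores the sender's original order.
message = "".join(payload for _, payload in sorted(arrived))
print(message)  # hello world!
```

The magic isn’t in the sorting – it’s that every packet carries its position in a header, so order can always be restored at the far end.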
But this “as long as it gets there eventually” approach to sending traffic over Internet connections won’t work with AV files. Video and audio packets must arrive in a specific order for them to make any sense at the receiving end – otherwise, what you’d see and hear would be a mishmash of picture elements and sound.
Not only that, but some packets are expected to be dropped as they travel through a network, which is why things like checksums and packet counts are used – if packets are missing or corrupted, the receiver requests that they be sent again until the message is complete. This is a minor annoyance with normal email traffic, but intolerable with AV traffic.
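Here’s another toy sketch of those two checks – a checksum to catch corruption, and a gap in the packet count to flag a missing packet. (Real TCP uses its own 16-bit checksum; the CRC-32 from Python’s standard library is just a stand-in, and the payloads are invented.)

```python
import zlib

def make_packet(seq, payload):
    # Attach a CRC-32 checksum so the receiver can detect corruption.
    return (seq, payload, zlib.crc32(payload))

# Packets 0 and 2 arrived; packet 1 got dropped along the way.
received = [make_packet(0, b"part one "), make_packet(2, b"part three")]

# Check 1: verify each packet against its checksum.
for seq, payload, checksum in received:
    assert zlib.crc32(payload) == checksum, f"packet {seq} corrupted"

# Check 2: look for gaps in the sequence -- time to request a resend.
expected = set(range(3))
missing = expected - {seq for seq, _, _ in received}
print(missing)  # {1}
```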
So we have different protocols for sending video and audio to ensure they stream smoothly. And we add error correction and buffering to make sure you don’t see or hear anything until the packets are back in order. Buffering and error correction mean there is a delay between when the AV is streamed and when we watch it, known as latency.
One very common protocol is User Datagram Protocol (UDP), which is the default setting for many video encoders. UDP is widely used and recognized, but AV traffic using it can have trouble getting past firewalls, which is why many streaming protocols offer a TCP fallback to make the job easier.
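To see UDP’s “fire and forget” style in action, here’s a minimal sketch using Python’s standard socket module: one datagram, no handshake, no delivery guarantee. Both ends run on the loopback interface, and the port is whatever the operating system hands out.

```python
import socket

# Receiver: bind a UDP socket and let the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2)                   # don't block forever in a demo
port = receiver.getsockname()[1]

# Sender: no connection setup -- just address the datagram and send it.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"one video packet", ("127.0.0.1", port))

data, addr = receiver.recvfrom(2048)
print(data)  # b'one video packet'

sender.close()
receiver.close()
```

Notice there’s no acknowledgment anywhere – if that datagram had been lost, neither side would ever know. That’s exactly why the real-time protocols described next were layered on top.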
UDP on its own makes no timing guarantees, so more sophisticated protocols were built on top of it to assure real-time transmission. These include Real-time Transport Protocol (RTP) and its more commonly used siblings, Real Time Streaming Protocol (RTSP) and Real Time Messaging Protocol (RTMP). RTSP is far more popular than RTMP, which was originally developed by Macromedia to stream video and audio as Flash files.
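What do those real-time protocols actually add? Mostly a small header on every packet. As a sketch, here’s the fixed 12-byte RTP header (RFC 3550) packed with Python’s struct module – it carries a sequence number and a media timestamp, the two fields that let a receiver put packets back in order and back in time. (The sequence, timestamp, and SSRC values below are arbitrary examples.)

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96):
    # Byte 0: version 2, no padding, no extension, no CSRC entries.
    byte0 = 2 << 6
    # Byte 1: marker bit clear, 7-bit payload type.
    byte1 = payload_type & 0x7F
    # Network byte order: 1+1 bytes, 16-bit seq, 32-bit timestamp, 32-bit SSRC.
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

header = rtp_header(seq=1, timestamp=90000, ssrc=0x1234)
print(len(header))  # 12 bytes, before any media payload
```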
Another way to stream AV files is to use HTTP Live Streaming (HLS). HLS is commonly found in the world of Apple products and breaks up the video/audio stream into much smaller pieces that then play out as HTTP page requests. (Sorry! HTTP stands for Hypertext Transfer Protocol, and when you click a highlighted link to open a Web page, you are using HTTP.)
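To make that concrete, the “recipe” for an HLS stream is just a small text playlist (a .m3u8 file) that lists those pieces in order; the player fetches each one with an ordinary HTTP request. A minimal example might look like this (the segment filenames and durations are invented for illustration):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXTINF:6.0,
segment2.ts
#EXT-X-ENDLIST
```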
And there’s another way to stream AV using HTTP requests. This one is more widely accepted and runs on any platform. It’s called Dynamic Adaptive Streaming over HTTP (DASH – now that’s a cool acronym!), and it works by dynamically changing the file size and bit rate in response to network conditions. Some program segments may be large and others may be smaller, depending on network speeds.
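The adaptive idea behind DASH can be sketched in a few lines: before fetching each segment, pick the highest bit rate the measured network speed can sustain. (The rendition bit rates and measurements below are made up for illustration – real players also factor in buffer levels and recent throughput history.)

```python
renditions_kbps = [400, 1200, 2500, 5000]   # available encodings of the same program

def pick_rendition(measured_kbps):
    # Choose the best rendition that fits within the measured bandwidth,
    # falling back to the lowest one if the network is very slow.
    fitting = [r for r in renditions_kbps if r <= measured_kbps]
    return max(fitting) if fitting else min(renditions_kbps)

print(pick_rendition(3000))  # 2500 -- plenty of headroom for the 2.5 Mb/s stream
print(pick_rendition(200))   # 400  -- network is slow, take the smallest rendition
```

Run that selection before every segment and you get exactly the behavior described above: large, high-quality chunks when the network is fast, smaller ones when it isn’t.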
The MPEG compression system (from the Moving Picture Experts Group) can run hand-in-hand with DASH, combining two acronyms along the way and probably giving you a headache! HLS, DASH, UDP, and RTSP are all recognized by media devices with IP connections and MPEG decoders (JPEG and JPEG2000 files can also travel with TCP/IP headers).
Other acronyms you’ll run across as you dip your toes in the water of AV-over-IP include ABR (Adaptive Bit Rate streaming), DSS (Dynamic Stream Shaping), and MBR (Multi Bit Rate streaming). Wow, isn’t this fun? And you thought the AV industry was already overloaded with acronyms.
We’ll go into more detail about these terms in an upcoming post, but I sense right now that you have had enough of acronyms for the day and are either in a FOG or developing a HEADACHE from AOL (Acronym Over Load).
Class dismissed! (Don’t worry, there won’t be a TEST or POP QUIZ on this…)