STOP 2110!





I thought that SMPTE 2022-6 was a joke, but then came SMPTE 2110. For anyone wanting to do virtualization, cloud or standard IT-based implementations, this is a slap in the face for any software engineer. Old-school hardware engineering combined with design by committee at its best. This can't be salvaged.


Let's add some perspective, have a look at the state of the industry right now, and try to make sense of the many acronyms being thrown around. There are a number of seemingly competing standards; let's look at the major contenders.

SMPTE 2022-2

● ‘Simple DVB’ / ‘MPEG2 TS’ 
● Understood by many, supported today 
● Low rate 
● Complex to manage data events 
● Codec-dependent latency
● Many software implementations 
● Does work in the cloud just fine

SMPTE 2022-6

● ‘SDI over IP’ 
● All blanking ‘kludges’ carry over 
● Requires high-perf NICs and switches
● Strict jitter / latency expectations
● Supported and working now
● No software implementation 
● Does not work in the cloud

SMPTE 2110 / AES67

● ‘Uncompressed elementary streams’ 
● Large data rate (>1Gbps for HD) 
● ANC data transcription (CC, AFD) 
● Requires high-perf NICs and switches 
● Not self-described or discoverable 
● Poor scaling to UHD (>8Gbps!)
● Needs NMOS to do anything useful
● No software implementation
● Does not work in the cloud
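The headline rates in the list above are easy to verify with back-of-the-envelope arithmetic. A quick sketch, assuming the common 10-bit 4:2:2 sampling (20 bits per pixel on average) and active pixels only, since 2110-20 does not carry blanking:

```python
def active_video_gbps(width, height, fps, bits_per_pixel=20):
    """Uncompressed active-picture rate in Gbps. 2110-20 carries only the
    active raster (no SDI blanking); 10-bit 4:2:2 averages 20 bits/pixel."""
    return width * height * fps * bits_per_pixel / 1e9

hd_30  = active_video_gbps(1920, 1080, 30)   # ~1.24 Gbps
hd_60  = active_video_gbps(1920, 1080, 60)   # ~2.49 Gbps
uhd_60 = active_video_gbps(3840, 2160, 60)   # ~9.95 Gbps
```

So a single 1080p60 stream already saturates two GbE links' worth of bandwidth, and UHD at 60p blows straight past 8Gbps, which is where the "poor scaling to UHD" point comes from.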


AMWA NMOS

● Metadata layering for 2110 / AES67 
● Discovery for 2022-6 / 2110 / AES67 
● RESTful API for queries and registration 
● Requires mDNS / Bonjour 
● No security (at all, by design) 
● Metadata layer standard is controversial
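For illustration, the discovery flow in NMOS IS-04 boils down to finding a Query API endpoint via DNS-SD/mDNS (service type `_nmos-query._tcp`) and then issuing plain REST calls against it. A minimal sketch; the host and port are hypothetical stand-ins for whatever the mDNS lookup would return:

```python
# Hypothetical values standing in for what a DNS-SD/mDNS lookup of
# "_nmos-query._tcp" would return for a registry on the local network.
host, port = "registry.example.net", 80

def query_url(resource, api_version="v1.2"):
    """Build an IS-04 Query API URL (resources include: nodes, devices,
    sources, flows, senders, receivers)."""
    return f"http://{host}:{port}/x-nmos/query/{api_version}/{resource}"

senders_url = query_url("senders")
```

Note that nothing in this flow authenticates anything, which is exactly the "no security, by design" point above: whoever can reach the registry can query and register.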


AIMS

● No technical standard or work
● Loads of stickers and lapel pins
● Propaganda trade org to sell 2110


ASPEN

● Seemingly dead
● Like 2110 / AES67, but uses MPEG2 TS
● Self-describing, elementary 
● Gains benefits of MPEG2 TS experience
● Large data rate (1Gbps for HD) 
● Requires high-perf NICs and switches 
● All ANC signals transcribed (CC, AFD, etc.)


NDI

● Low-latency, compressed streams
● Self-describing, simple discovery 
● Modest data rate (100Mbps for HD)
● Designed for software use
● Ready to go, free SDK 
● Proprietary, black box
● No low-level API or QoS control
● Enterprise scalability unproven 


TICO

● TICO now also with misleading "IP" label 
● SDI in-line 4:1 compression or
● 2022-6 / 2110 4:1 in-line compression
● No TICO IP stream exists as such
● Adds "compression" to SDI / 2022-6 / 2110 
● All other problems remain the same
● Proprietary


Sony NMI

● Seemingly dead
● Sony seems to be AIMS / 2110 now
● Hardware only
● Compressed - multiple UHD in 10GbE
● Proprietary, black box

Studying the tables above, it becomes clear that no single video over IP standard does it all. But some are worse than others, especially when your vision is a legacy-free, software-only environment, where the limit of what you can do with video is defined only by the available computing power and by networking and storage bandwidth - local, virtualized or in the cloud.


SMPTE 2110 in a limited usage scenario, e.g. inside a studio, would be fine, although it is hard to see where the benefits over SDI would be. This is a general problem for most broadcast video over IP solutions: since virtually no cameras support the respective IP standards, SDI camera outputs still need to be converted to IP and back to SDI again, e.g. to connect a monitor, vision mixer or anything else - and that all over the place. The vendors selling converters or "gateways" stand to make the most money.
All this could be chalked up to birthing pains, but now a large part of the industry is telling broadcasters that SMPTE 2110 is the one solution that can do it all, and that you had better join the bandwagon if you do not want to be left behind.

There are many good reasons why everyone should sit this one out, even if one has money to burn and feels that, with UHD and 8K coming, infrastructure investments need to be made now.

Don't - at least not on SMPTE 2110.


SMPTE 2110 in action!

Where to start? Maybe by saying it is half-baked and unfinished, pushed out of the door before all the problems were addressed - problems that now need to be fixed by NMOS side-car projects? No, that would assume it could eventually be fixed.

The whole approach to "solving" the problem of video over IP shows where the actual problem lies: the type of people that were involved in solving it.

If you ask broadcast hardware engineers to solve video over IP, you get SMPTE 2022-6, which is SDI in its entirety wrapped into an IP stream. To create this in software you need to emulate a complete SDI video card, including all the legacy "features" of the SDI standard. Of course this is uncompressed, and at least 1.5Gbps for a simple 1080i video stream.
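To put numbers on that: a 2022-6 sender chops the full 1.485 Gbps HD-SDI raster, blanking included, into RTP packets. Assuming the commonly cited figure of 1376 media octets per packet (treat that constant as an assumption), the packet rate alone explains the NIC and switch requirements:

```python
SDI_HD_BPS = 1.485e9           # full HD-SDI raster rate, blanking and all
HBRMT_PAYLOAD_BYTES = 1376     # media octets per ST 2022-6 RTP packet (assumed)

packets_per_second = SDI_HD_BPS / (HBRMT_PAYLOAD_BYTES * 8)
inter_packet_gap_us = 1e6 / packets_per_second
# roughly 135,000 packets/s, one every ~7.4 microseconds
print(f"{packets_per_second:,.0f} pkt/s, one every {inter_packet_gap_us:.2f} us")
```

A sustained rate of one packet every few microseconds, per stream, is exactly the kind of load that commodity kernels and virtual switches are not built for.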

After the backlash received for SMPTE 2022-6 - which was essentially still SDI, just swapping the transport medium from a continuous serial stream to packetized Ethernet - SMPTE 2110 is almost the other extreme. Now everything is completely unpacked and stuffed into separate, undescribed elementary RTP streams, creating a whole range of new problems - and of course it is also uncompressed.

Why uncompressed? There may, theoretically, be a case for uncompressed. Theoretically. This stupid discussion resurfaces with each and every technology change. Bring back LaserDisc! No (affordable) tape format since analog Betacam was uncompressed. Yes, D1 and D6 were, but apart from some broadcast labs no one could afford them, and no one used them in production. DigiBeta, HDCAM, HDCAM SR etc. all use compression, and still no one ever complained or said the quality was not good enough for high-end post. In cinema production most cinematic "RAW" formats use compression, and many prefer shooting high-quality ProRes or DNxHR directly to save space and production time. Many statements about uncompressed are myths kept alive by propagating lies, like claiming that uncompressed has lower latency or is needed to pull a proper chroma key. But even the question of uncompressed or not is not the real problem; it just creates unnecessarily large data, which severely limits the scenarios in which SMPTE 2110 can be used.


The biggest kicker about SMPTE 2110 is that, with its latency and jitter requirements, it is almost impossible to implement in a pure software stack - one that could run, for instance, in a popular virtualization environment like VMware or Hyper-V. For now, the only way to get it to work on a standard IT server requires a physical card from vendors such as AJA or Deltacast - if those were even released yet, and working. In a VM environment these cards would have to be passed through directly, one card per VM. This offers little benefit over SDI, which for now is also considerably cheaper. Maybe one day a pure software-based SMPTE 2110 stack will be possible, but only with enterprise 10GbE or 40GbE NICs and some well-written drivers. A real-time OS would also help - which would again exclude Windows, macOS, standard Linux, and virtualization/cloud.
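How hostile a general-purpose OS is to this kind of pacing can be probed in a few lines. The sketch below asks the scheduler for a short sleep and measures how late it actually returns; on a non-real-time kernel the overshoot is typically tens to hundreds of microseconds - far above the few-microsecond inter-packet spacing an uncompressed HD sender would have to hold:

```python
import time

def measure_sleep_jitter(target_us=125, samples=200):
    """Request a short sleep repeatedly and record how late the OS
    actually wakes us up, in microseconds."""
    target_s = target_us / 1e6
    overshoots = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(target_s)
        elapsed = time.perf_counter() - start
        overshoots.append((elapsed - target_s) * 1e6)
    return min(overshoots), max(overshoots)

best, worst = measure_sleep_jitter()
print(f"sleep overshoot: best {best:.1f} us, worst {worst:.1f} us")
```

Real 2110 senders sidestep the scheduler entirely with hardware pacing or kernel-bypass networking, which is precisely why "just run it in a VM" does not work.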

SMPTE 2110 on its own does not really work. Its wonderful elementary nature splits it into video (2110-20), audio (2110-30) and ANC data (2110-40), plus a separate timing "stream" (2110-10 / SMPTE 2059) basically using PTP-driven network timing. None of this tells you what you are dealing with. Without the Session Description Protocol providing a "text file" defining what is really in those streams, they are meaningless raw data. The SDP, and how it is controlled, is not part of the scope of SMPTE 2110. This is where 2110 needs to enlist the help of AMWA / BBC R&D's NMOS, the magic bullet that will fix all this. Or not. As a matter of fact, getting different vendors' devices to talk to each other is a bit of an adventure in real life. Different vendors have extended the protocol to their liking or interpret SDP differently. Then there is still the issue of security and large-scale discovery. Or the question of who calls the shots should you ever entertain the crazy idea of connecting two SMPTE 2110-equipped OB trucks to each other. The only solution? Go SDI or 2022-6!
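To illustrate how much of the meaning lives in the SDP rather than in the streams themselves, here is a hypothetical 2110-20 style SDP fragment (addresses and payload type are made up) and a few lines pulling the format parameters out of its a=fmtp line - everything that makes the RTP payload interpretable as video is in this one attribute:

```python
EXAMPLE_SDP = """\
v=0
o=- 123456 123456 IN IP4 192.0.2.10
s=Example 2110-20 video
t=0 0
m=video 5004 RTP/AVP 96
c=IN IP4 239.0.0.1/64
a=rtpmap:96 raw/90000
a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; depth=10; \
exactframerate=30000/1001; PM=2110GPM; SSN=ST2110-20:2017
"""

def parse_fmtp(sdp_text):
    """Return the a=fmtp parameters as a dict. Without these the RTP
    packets are just raw, meaningless bytes."""
    for line in sdp_text.splitlines():
        if line.startswith("a=fmtp:"):
            _, _, params = line.partition(" ")
            return dict(p.strip().split("=", 1)
                        for p in params.split(";") if "=" in p)
    return {}

info = parse_fmtp(EXAMPLE_SDP)   # e.g. info["width"] == "1920"
```

And the standard deliberately says nothing about how this file gets from sender to receiver - that is the gap NMOS is supposed to plug.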


It comes as little surprise that even large broadcasters are now starting to dabble with Newtek's NDI, which has its own set of problems and limitations, but also a fair number of advantages. It has a lot of plus points: it is an easily deployable software-only stack that runs just fine on older hardware with non-enterprise NICs; it unashamedly uses compression and can therefore carry multiple HD streams over simple GbE; it works in VMs; and it comes with a simple, free SDK and sample apps - all of which gains it more followers every day. NDI is a proprietary black box that is very simply configured, because there is little or nothing to configure. For broadcast engineers the lack of low-level control and QoS monitoring is a nightmare, as is the completely opaque nature of the codec used inside. Enterprise-wide discovery and scalability are still a work in progress, although progressing. The advantage of NDI's proprietary nature is that Newtek can drive its development at a much faster pace than the "competition" stuck in SMPTE or VSF committees, having to agree on the lowest common denominator.
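The bandwidth argument is simple arithmetic. Assuming roughly 130 Mbps for an NDI HD stream (the codec is opaque, so this is an estimate) versus ~1.25 Gbps uncompressed, and reserving 25% of the wire for headroom:

```python
def streams_per_link(link_gbps, stream_mbps, headroom=0.75):
    """How many streams fit on a link, reserving headroom for overhead
    and bursts. The 130 Mbps NDI figure is an assumed estimate."""
    return int(link_gbps * 1000 * headroom // stream_mbps)

ndi_hd_on_1gbe   = streams_per_link(1.0, 130)    # compressed NDI HD on plain GbE
st2110_hd_on_10g = streams_per_link(10.0, 1250)  # uncompressed HD on 10GbE
```

Under these assumptions a plain GbE port carries about as many HD streams with NDI as a 10GbE port carries uncompressed - ten times the link for roughly the same stream count.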


No one for now. Either Newtek opens up NDI for real, or a new third option comes along. The SMPTE 2110 folks talk about adding compression to the standard, but that is likely to be just something like TICO, which really does not help anyone.


Maybe the answer already exists: why not just use SMPTE 2022-2, but with much higher bit rates or other higher-quality codecs - creating a Transport Stream with AVC-Intra 100 or AVC-Ultra inside. This would be self-describing and low-latency, could handle all the ANC data, and would work with most existing tools out there already. VLC can even play such streams out of the box.
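To make the idea concrete, here is a toy sketch of the outermost layer of such a stream: the 188-byte MPEG-2 Transport Stream packets. It is deliberately minimal - no PAT/PMT tables, no PCR, and naive 0xFF padding instead of proper adaptation-field stuffing; a real mux like ffmpeg or TSDuck handles all of that - but it shows how simple and self-contained the container format is:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def packetize(pid, payload):
    """Wrap a payload into minimal 188-byte TS packets: sync byte,
    13-bit PID, payload_unit_start flag on the first packet, and a
    4-bit continuity counter. A toy sketch, not a compliant mux."""
    packets = []
    cc = 0                               # continuity counter, wraps at 16
    chunk_size = TS_PACKET_SIZE - 4      # 4-byte header per packet
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        pusi = 0x40 if i == 0 else 0x00  # payload_unit_start_indicator
        header = bytes([SYNC_BYTE,
                        pusi | (pid >> 8),
                        pid & 0xFF,
                        0x10 | cc])      # payload only, no adaptation field
        packets.append(header + chunk.ljust(chunk_size, b"\xff"))
        cc = (cc + 1) % 16
    return b"".join(packets)

ts = packetize(0x100, b"\x00" * 1000)
```

The point is that the container is boring, well-understood, and already supported everywhere - the only new part would be agreeing on the high-bitrate intra codec inside it.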