The digital video industry seems to be gradually succumbing to the allure of the 50% bandwidth savings promised by HEVC, a.k.a. H.265. What if those savings could be achieved with the existing AVC (H.264) technology? Cinova thinks its post-processing product, Crunch, can do just that.
The benefits the video industry can realize from a reduction of 50% in the amount of bandwidth necessary to deliver video are enormous. From saving money in bandwidth charges to delivering higher quality video on mobile networks, there are few places in the chain of delivery that don’t benefit in some way.
However, the cost and time required to move to a new codec like HEVC are similarly enormous. Devices such as televisions, set-top boxes, and smartphones need to be replaced and video encoders upgraded, not to mention all the video that needs to be re-encoded in the new format. The change from MPEG-2 encoding to AVC-based MPEG-4 took the better part of a decade. HEVC adoption will likely take a similar length of time.
Sunil Sanghavi, COO of Cinova, believes there is a lot more efficiency to be wrung out of AVC, and that it will get us to 50% savings right now.
While online video providers will be able to pioneer 4K viewing for the public using bit rates as low as 10Mbps (for movies over broadband), the introduction of true Ultra High Definition (UHD) television on broadcast networks will start at around 15Mbps, and could initially require bit rates as high as 30Mbps, even using the best currently available HEVC/H.265 compression.
Although consumer marketing conflates the two, there is a big difference between 4K television, which delivers four times the pixels of HDTV, and what is being talked about as true UHD, which quadruples the pixel count but also requires at least 50/60 frames per second and at least 10-bit colour depth.
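Some rough arithmetic makes the gap concrete. The sketch below computes uncompressed bit rates for HDTV versus true UHD under illustrative assumptions (4:2:0 chroma subsampling, i.e. 1.5 samples per pixel on average); it is meant only to show why the jump to UHD puts so much pressure on compression.

```python
# Rough, uncompressed bit-rate arithmetic. Assumes 4:2:0 chroma
# subsampling (1.5 samples per pixel on average) -- illustrative only.

def raw_mbps(width, height, fps, bit_depth, samples_per_pixel=1.5):
    """Uncompressed video bit rate in megabits per second."""
    bits_per_frame = width * height * samples_per_pixel * bit_depth
    return bits_per_frame * fps / 1e6

hdtv = raw_mbps(1920, 1080, 30, 8)    # HDTV at 30 fps, 8-bit
uhd = raw_mbps(3840, 2160, 60, 10)    # true UHD at 60 fps, 10-bit
print(f"HDTV raw: {hdtv:.0f} Mbps, UHD raw: {uhd:.0f} Mbps, "
      f"ratio: {uhd / hdtv:.1f}x")
```

The 10x raw-data ratio (4x pixels, 2x frame rate, 1.25x bit depth) is exactly why true UHD broadcasting leans so heavily on next-generation compression.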
Imaging technology company Beamr demonstrated a video optimisation solution at IBC 2013 that can potentially reduce bit rates by up to 40% for streamed OTT video.
Bit rates from physical media such as Blu-ray discs could be reduced by up to 75%, the company also claimed.
Beamr’s CTO, Dror Gill, emphasised that Beamr Video is not a new type of video compression codec: instead it controls existing video compression systems like H.264 or HEVC, manipulating the encoding process in such a way that, in effect, it lowers the threshold at which bit-rate reductions cause artefacts visible to the naked eye.
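A quality-driven control loop of that general shape can be sketched as follows. This is NOT Beamr's actual algorithm, and `encode` and `perceptual_quality` are hypothetical placeholders standing in for a real encoder and a real perceptual metric; the point is only the control structure: keep lowering the bit rate until measured quality would drop below a visibility threshold.

```python
# Generic sketch of quality-driven bit-rate reduction (not Beamr's
# actual method). `encode` and `perceptual_quality` are placeholders.

def encode(source, bitrate_kbps):
    # Placeholder stand-in for an H.264/HEVC encode at a given bit rate.
    return {"bitrate": bitrate_kbps}

def perceptual_quality(source, encoded):
    # Placeholder stand-in for a perceptual metric (e.g. an SSIM-like
    # score in [0, 1]); here quality degrades smoothly with bit rate.
    return min(1.0, encoded["bitrate"] / 4000)

def lowest_acceptable_bitrate(source, start_kbps, threshold=0.95, step=0.9):
    """Lower the bit rate in steps until quality would fall below threshold."""
    best = start_kbps
    rate = start_kbps
    while rate > 100:
        candidate = int(rate * step)
        if perceptual_quality(source, encode(source, candidate)) < threshold:
            break  # one step too far; keep the previous rate
        best = candidate
        rate = candidate
    return best

print(lowest_acceptable_bitrate("clip.mp4", 8000))
```

In a real system the quality check would run on actual encoded frames, per scene or per segment, rather than on a global model like this toy one.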
Microsoft has announced the general availability (GA) release of Windows Azure Media Services. The release is now live in production, supported by a new Media Services dev center, backed by an enterprise SLA, and ready to be used for media projects of all kinds.
Cloud streaming delivery models such as Microsoft's Windows Azure Media Services could change the landscape for dedicated encoders.
In our recent coverage of the National Association of Broadcasters' show (NAB), we mentioned Windows Azure Media Services as one of several emerging cloud models for scaling up streaming delivery to television-level viewership. The benefit of cloud-based delivery holds potential, yet one part of Azure Media Services was under-reported: live streaming.
Why does live streaming in Azure Media Services demand more attention? We think the inclusion of at least one live encoder option is a precursor to a larger trend that will occur within the live encoding space: a move towards one-off rentals of live encoding and away from buying dedicated live encoding resources.
Depending upon your encoding tool, you may have access to a checkbox or number box that controls something called IDR frames. What are these creatures and what is their significance? More importantly, what's the optimal setting? Well, let's just say that if you're seeing anything like the random blockiness in the picture below when you drag the playhead back and forth in the video window, you're probably using the wrong value.
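As a hedged illustration of one common rule of thumb (one IDR frame per segment boundary, so every segment starts on a seekable, independently decodable frame), the sketch below computes the interval in frames from frame rate and segment duration, then shows how it might be passed to an encoder. The ffmpeg invocation is a hypothetical example, not a universally optimal setting.

```python
# Rule-of-thumb IDR interval for segmented delivery: one IDR per
# segment boundary, i.e. interval (in frames) = fps * segment length.

def idr_interval_frames(fps, segment_seconds):
    return round(fps * segment_seconds)

# Example: 30 fps video with 2-second segments -> an IDR every 60 frames.
interval = idr_interval_frames(30, 2)
print(interval)

# Hypothetical ffmpeg/libx264 invocation using that interval; -g sets
# the maximum GOP size and -keyint_min the minimum keyframe distance.
args = ["ffmpeg", "-i", "in.mp4", "-c:v", "libx264",
        "-g", str(interval), "-keyint_min", str(interval),
        "out.mp4"]
```

Pinning the minimum and maximum keyframe distance to the same value keeps IDR placement regular, which matters when multiple adaptive-streaming renditions must stay segment-aligned.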
How can companies that create video content make sure it remains salable for many years to come? By future-proofing it. Read on to learn how to keep your content fresh and your files accessible.
Creating content is a major investment for any company, no matter what business you’re in. The investment is only paid back by selling the content to distributors and consumers. The more times and ways in which this content can be sold — audio, video, multi-media — the better the return on investment (ROI). And this ROI can be improved if these sales can be carried well into the future, not just for a short window of 2 or 3 months.
Logically, the longer content is attractive, relevant, and accessible to potential buyers, the more money it can make. For content producers, this leads to a critically important question: How can they future-proof their content so that it remains accessible and salable for years to come?
Adaptive streaming technologies like Adobe’s Dynamic Streaming, Microsoft’s Smooth Streaming, and Apple’s HTTP Live Streaming use multiple encoded files to deliver the optimal viewing experience to video consumers watching on a range of devices, from mobile phone to workstation, via a range of connections, from FIOS to cellular. Though there are differences in implementation, all adaptive technologies switch streams based upon heuristics like CPU utilization or buffer size. That is, if the player detects that buffer levels are too low, it may choose a lower data rate stream to avoid running out of data. If CPU utilization gets too high and frames start dropping, it may request a lower resolution file that’s easier to decode.
While most of the technology that enables stream switching is lodged in the player or streaming server, there’s lots to do on the encoding side to produce streams that switch smoothly. In this article, I’ll outline the key differences between producing for single stream delivery and producing for adaptive streaming.
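The buffer-driven switching heuristic described above can be sketched in a few lines. The rungs and thresholds below are illustrative, not taken from any particular player; the point is the shape of the decision: shift down when the buffer runs low, shift up when it is comfortably full.

```python
# Toy buffer-based adaptive switching heuristic. Ladder rungs and
# thresholds are illustrative values, not from any real player.

RUNGS_KBPS = [400, 800, 1500, 3000]  # example bitrate ladder

def next_rung(current_index, buffer_seconds, low=5.0, high=15.0):
    if buffer_seconds < low and current_index > 0:
        return current_index - 1      # shift down to avoid a stall
    if buffer_seconds > high and current_index < len(RUNGS_KBPS) - 1:
        return current_index + 1      # shift up for better quality
    return current_index              # buffer healthy: stay put

print(RUNGS_KBPS[next_rung(2, 3.0)])   # low buffer -> step down
print(RUNGS_KBPS[next_rung(2, 20.0)])  # full buffer -> step up
```

Real players add hysteresis and throughput estimation on top of this, but even the toy version makes clear why the encoder side matters: switching is only seamless if the rungs are segment-aligned and keyframe-aligned.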
At the NAB show, Yves Faroudja and his new startup showed off new technology designed to provide up to 50 percent reduction in video bit rates without reduction in image quality.
Faroudja's scheme doesn't alter current compression standards (MPEG-2, MPEG-4, HEVC). It's rooted in Faroudja's belief that such compression systems aren't exploiting all the available redundancy to improve compression efficiency.
Under the new scheme, Faroudja introduces a new pre-processor (prior to compression) and post-processor (after compression decoding). "We take an image and simplify it; and that simplified image goes through the regular [standards-based] compression process," he explained. "But we never throw away information."
Instead, in parallel with the conventional compression path, Faroudja inserts what he calls a "support layer." This compresses signals not used in Faroudja's so-called simplified image. Together with the decompressed simplified image, the support layer helps reconstruct the original image in full resolution -- at a reduced bit rate.
Faroudja claims "a bit rate reduction of 35% to 50% for an equivalent image quality [and] a significant improvement of the image quality on low bit rate content."
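A toy one-dimensional example conveys the layered idea described above. This is NOT Faroudja's actual method, just an illustration of the principle that a "simplified" signal plus a "support layer" of residuals can reconstruct the original exactly, so no information is thrown away.

```python
# Toy 1-D illustration of a base-plus-support-layer split. The
# "simplified" signal is a pairwise average; the "support layer"
# holds the residuals needed for exact reconstruction.

def split_layers(signal):
    simplified = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    support = [a - m for a, m in zip(signal[::2], simplified)]
    return simplified, support

def reconstruct(simplified, support):
    out = []
    for m, r in zip(simplified, support):
        out.extend([m + r, m - r])  # recover both original samples
    return out

original = [10, 12, 8, 4, 7, 9]
simplified, support = split_layers(original)
assert reconstruct(simplified, support) == original  # lossless round trip
```

The compression win in a scheme like this comes from the simplified signal compressing well through a standard codec while the support layer, being small residuals, costs comparatively few bits.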
Opus is a state-of-the-art, royalty-free, lossy audio codec covering more applications than any other single audio codec, from low-latency VoIP to high-fidelity music storage. After five years of open development, including contributions from Xiph.Org, Skype/Microsoft, Mozilla, Broadcom, and many individual developers, Opus was standardized by the IETF in 2012 as RFC 6716 and has since been deployed to hundreds of millions of computers and devices.
Daala is a new open effort to build a state-of-the-art video codec targeting compression performance beyond HEVC and VP9. Leveraging the experience we gained with Opus, we are building a new technical framework for video coding from the ground up to avoid patent thickets and remain royalty-free: by breaking from the common design pattern of block-based transform codecs, we avoid many licensing complications and create an opportunity to better resolve some of the weaknesses of existing formats.
Dynamic Adaptive Streaming over HTTP (DASH) is a technology that was implemented and deployed before any scientific literature on it existed. Simply put, the server offers several representations of the same video; clients choose the representation that best fits their capabilities. Since 2008, many researchers have deciphered the global behavior of client-based adaptive mechanisms. However, one key piece of the theoretical cake is still missing: what is the optimal set of video representations the server should offer?
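One way to make that open question concrete: given a distribution of client bandwidths, choose k rungs so that clients collectively play the highest possible bit rates (each client plays the highest rung at or below its bandwidth). The brute-force sketch below is purely illustrative, under an assumed objective, and is not a method from the literature discussed.

```python
# Toy formulation of the representation-selection problem: pick the
# k-rung ladder maximizing the total bit rate usable by clients.
# Objective and numbers are illustrative assumptions.
from itertools import combinations

def usable(rungs, bandwidth_kbps):
    """Highest rung a client at this bandwidth can play (0 if none)."""
    fitting = [r for r in rungs if r <= bandwidth_kbps]
    return max(fitting) if fitting else 0

def best_ladder(candidates, client_bandwidths, k):
    return max(
        combinations(sorted(candidates), k),
        key=lambda rungs: sum(usable(rungs, b) for b in client_bandwidths),
    )

clients = [600, 700, 1200, 2500, 4000, 4500]   # measured client kbps
candidates = [400, 800, 1500, 3000, 4000]      # rungs we could encode
print(best_ladder(candidates, clients, 3))
```

Even this toy version shows why the question is hard: the best ladder depends on the client distribution, which shifts over time, and a real objective would weigh perceived quality per bit rather than raw bit rate.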
NHK's 8K Super Hi-Vision is an extremely bandwidth-heavy format -- so much so that earlier tests used gigabit-class internet links rather than traditional TV broadcasting methods. Thankfully, the broadcaster and Mitsubishi have developed an encoder that could keep data rates down to Earth. The unassuming metal box (above) is the first to squeeze 8K video into the extra-dense H.265 (HEVC) format, cutting bandwidth usage in half versus H.264. Its parallel processing is quick enough to encode video in real time, too, which should please NHK and other networks producing live TV. We'll still need faster-than-usual connections (and gigantic TVs) to make 8K an everyday reality, but that goal should now be more realistic.
When it comes to video encoding, the choice between hardware and software comes down to flexibility, latency, and cost.
One of the hardest choices encoding technicians have to make is deciding between hardware and software. Hardware-based encoders and transcoders have had a performance advantage over software since computers were invented. That's because dedicated, limited-purpose processors are designed to run a specific algorithm, while the general-purpose processor that runs encoding software is designed to handle several functions. It's the specialist versus the jack-of-all-trades.
In the past few years, processors and workflows have changed. The great disruptor has been time and the economics of Moore's Law, which famously says that the number of transistors on a chip approximately doubles every 24 months. The logical outcome is that CPUs roughly double in power every couple of years. Lately, Intel -- whose co-founder Gordon Moore coined Moore's Law -- has been adding specialty functions, along with its math co-processors, to narrow the gap between general-purpose and specialty processors.
There are many layers and elements to both a general-purpose processor and a task-specific hardware processor. The general-purpose CPU is the most common -- there are literally billions of them in all manner of computing devices -- while the more purpose-oriented processors include digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and integrated circuits (ICs) that are available for various industrial appliances and widely used in cellphones. Many of the structures and elements are similar across all types, but there are considerable differences. If you are not familiar with the elements of the various types, here are the basic structures of each.
Palo Alto-based video encoding start-up eyeIO left stealth mode Wednesday with the announcement that it has licensed its technology to one of the biggest players in the online video space. Netflix is using eyeIO’s encoding technology to cut down on the bandwidth of its streams, allowing the company to deliver HD video without busting subscribers’ bandwidth caps or overwhelming networks in emerging markets.
EyeIO has been operating stealthily since the end of 2010, and was able to win Netflix as a customer last summer. Netflix hasn’t said where or in exactly what capacity it is using the technology it has licensed from eyeIO, but the company’s VP of Product Development Greg Peters said in a press release that eyeIO is “an important part of the technology (Netflix uses) to improve video quality and overcome bandwidth challenges presented by Internet infrastructure.”
Raystream Inc. announced that a free trial of its HD video compression service will be available to any business offering HD video content online beginning Friday, December 16, 2011.
Raystream's proprietary video compression technology drastically decreases the file size of HD videos -- up to 90 percent, with an average of approximately 70 percent -- with no loss in the quality or crystal clarity for which HD video is known.
For example, "Raystream", a recent (and horrifically done) scam company, advertised their "amazing proprietary encoding technology", which was of course just x264 on default settings with no modifications.
HEVC far from mainstream.