CES 2011: Tablets

We continued to probe the question “Why have tablets achieved such market velocity in a short period of time?”

Our Assessment

Tablets have achieved a critical mass in less than a year for 3 reasons:

  1. A successful market example in the Apple iPad
  2. An opportune OS in Google Android, which is free and has established credibility in the smartphone market
  3. A huge ecosystem in Asia for the design and production of low cost computer-based products both quickly and at low margins

One is left wondering if this is not just another example of an embedded-processor product with a screen on it. If so, why are tablets so hot in 2011?

This is where Apple is the defining company. It has single-handedly shown that this product space is viable by the market it has created. The reasons are well known but the combination of user experience, form factor, pricing and applications all resonated with consumers in a way that this market space has never done before. Apple’s success has created a target.

Google’s ability to challenge Apple with a totally different business model has done two things: it has established a viable alternative for the ODMs and OEMs making tablets, and it has made a critical component, the OS, a free commodity.

There is another critical factor: the electronics infrastructure in Asia. From 2000 to 2005 we watched Taiwan struggle to make mouse products that were first competitive and then at the leading edge, beginning with laser-based mice built around HP chips. Over a span of 3 to 5 years the capabilities of many companies rose to include quality engineering and manufacturing, and thus credible products. In short, they developed the core competence needed to compete with the PC OEMs, Microsoft and Logitech. Chinese companies have since joined them. It is this competence, and hungry companies ready to leap on viable market opportunities, that is now descending on the tablet. They will drive costs out of every corner of the product and accept margins which US companies will walk away from.

For the Asian electronics ecosystem, the major risk factors which would normally block market entry have been removed: there is a demonstrated market, and the complex and difficult software engine is available virtually for free.

There is another component, tangentially related to 3D. A major cost component of the tablet is the display. The TFT LCD industry is getting increasingly anxious about its market as HD penetration begins to saturate. Where is the next market? At one time this was seen as display signage, but signage appears unlikely to take up the capacity now used for televisions. A consumer product is needed to consume massive amounts of display area. The tablet is one, IF an individual or family buys multiple tablets. This is a price issue, and one that the Asian electronics ecosystem is well prepared to tackle.
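
As a rough back-of-the-envelope sketch of the panel-area argument (the 46-inch 16:9 television and 9.7-inch 4:3 tablet sizes are our own illustrative assumptions, not figures from the show):

```python
import math

def screen_area(diagonal_in, aspect_w, aspect_h):
    """Screen area in square inches for a given diagonal and aspect ratio."""
    ratio = math.hypot(aspect_w, aspect_h)
    width = diagonal_in * aspect_w / ratio
    height = diagonal_in * aspect_h / ratio
    return width * height

tv_area = screen_area(46, 16, 9)      # a typical living-room HDTV (assumed size)
tablet_area = screen_area(9.7, 4, 3)  # an iPad-class tablet (assumed size)

print(f'46" TV panel area:    {tv_area:6.1f} sq in')
print(f'9.7" tablet area:     {tablet_area:6.1f} sq in')
print(f"tablets per TV panel: {tv_area / tablet_area:4.1f}")
```

Roughly twenty tablets consume the glass of one large television, which is why tablets only soak up panel capacity if prices fall far enough for households to buy several.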

Given that a bow wave of tablets is about ready to happen what does the future look like? A panel on Consumer 360 provided an excellent backdrop. Here are some highlights.

  • The tablet is the 4th consumer screen. This is likely the last screen. 2011 is the year of the tablet.
  • Another view is that we could see a near term divergence on the screen issue with the number of consumer screens rising to 8 or 9 and then falling. A convergence point could be as low as 2 with 3 – 5 likely. A point agreed to by many panel members is that the central screen will be the smartphone and not the PC or television. This is the personal device which will take on increasingly important roles for the individual. The smartphone is one’s control device and hub.
  • There is a larger context for what is taking place. What is happening is the disintermediation of the traditional franchises around the consumer. That is, the TV is no longer just in the living room. A DVR can exist in the car. The smartphone is my computer. This disintermediation also presents the major future opportunity. There is the need for services which integrate between devices and screens for the individual.
  • These services are the next major step forward. With the smartphone as my control point there is a need for it to adapt to me and my personal needs and life patterns. These needs will transition between screens and locations based on what I need or may need. The phone anticipates me. The boundary between the phone and cloud is immaterial. The smartphone is my personal assistant. One panel member called these Personal Centered Devices (PCD).
  • When the smartphone plays this role, privacy and identity are critical, and these have not been well addressed. It is clear, today, that individuals are willing to trade off privacy for the value the services bring to them. The phenomenon of “check in” has come to be well accepted by many consumers. But the phone has clearly not yet advanced to the level it will need to reach in this personal control device environment.
  • In this evolution of screens, personal control devices and identity, the end-game is seen as the automated home. Long sought and elusive, the home is a critical component of this blended future. How and when it happens is an open issue.
  • It is clear that no company is focused on creating these seamless personal experiences on personal devices. The market is just too early. But this is the future.

Conclusions

Now we can see the interplay and role of the Asian electronics infrastructure and the future of these connected screens and devices.

The tablet is one screen among many, but a very important one. It fills a gap in display real estate. The exact role it plays will vary by individual and location. There will be no single concept for the tablet. The notion of a consumptive device versus a productive device is too narrow a classification; all of these roles will exist.

A key role which the tablet will play, which the smartphone will not, is as a ubiquitous device. As prices decline this enables the ownership of many devices. They become like sheets of paper scattered about and with specific functionality.

To achieve this environment means continual declines in pricing to the point of being disposable. The Asian ecosystem is well suited to such a challenge.

Just as important, there must be a larger context for the role all the screens play. This means the integration of the use environment, the relationships between devices, personalization and identity. This is the huge challenge. We are only at the beginning.


CES 2011: 3D Assessment

There are two types of 3D on the floor: with glasses and autostereoscopic (no glasses). Sharp and Toshiba showed autostereoscopic displays. We did an informal assessment of the 3D by walking up to 16 displays and visually evaluating the 3D and overall display image quality. I have a background in holography research and have seen many 3D displays using coherent light; this is high quality 3D which needs no glasses. I also did research in integral photography, which is incoherent-light 3D and can be made to approach holography.

The Rating Exercise

High quality 3D creates a visual image where the screen becomes immaterial to the image space created. This was only seen on one display – a Samsung 75” LED display where the fish appeared in front of the screen. We rated this an 8 and it was the only display to achieve this level.

The quality of the 3D is heavily influenced by the content. The farther away the objects are, the weaker the 3D impression, due simply to the interocular distance of the eyes. As objects are imaged with the 3D camera, the closer they are to the camera the more realistic the 3D. This is one reason that animated movies can be so compelling: the imaging geometry can be carefully controlled by the creator. Close-up sports is another. The impression of the quality of 3D can vary from image sequence to sequence. From an observer’s standpoint, the more the content has marginal 3D qualities the less interest there is in wearing the glasses.
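
The geometry behind this is simple to see. A minimal sketch, assuming an average interocular distance of about 65 mm (our assumption; the figures are illustrative only), shows how quickly the binocular disparity angle collapses with distance:

```python
import math

INTEROCULAR_M = 0.065  # assumed average eye separation, roughly 65 mm

def disparity_deg(distance_m):
    """Angular difference between the two eyes' views of a point at this distance."""
    return math.degrees(2 * math.atan(INTEROCULAR_M / (2 * distance_m)))

for d in (0.5, 1, 2, 5, 10, 50):
    print(f"object at {d:5.1f} m -> binocular disparity {disparity_deg(d):6.3f} deg")
```

Past a few meters the disparity is a small fraction of a degree, which is why distant scenes read as nearly flat no matter how good the display is.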

Generally a large screen is better for 3D. It creates a wider field of view and makes the screen size large compared to the interocular distance. The largest display was the Samsung 3D Arena with 50 screens. We found it distracting and difficult to get immersed in, and rated it a 3.

One display, described as 3D from a PC, was on a large screen in the Toshiba booth. The combination of poor image quality and distracting 3D netted only a 2. We could not see how anyone would want to spend any time in front of this display.

Results

The ratings ranged from 2 to 8 with an average of 4.7. As a result we have doubts this market will take off. Our reasoning includes the following:

3D content is special, and we expect that to remain the case for the foreseeable future. For example, will the local news currently seen in HD go to 3D? We doubt it. Thus, 3D will remain a special viewing experience, not a mainstream one.

Glasses are not something which many consumers will like. Just look at the penetration of contact lenses: many people do not want to wear glasses. Glasses might be acceptable for the occasional movie but are unlikely to become routine in the living room.

Buying an expensive 3D large-screen television after making a recent investment in an HD television is something we see only early adopters doing. The average TV set has a lifetime of 7 years, and replacing a relatively new one just to see 3D is unlikely.

Based on this informal assessment, we doubt that consumers will find the visual differential from 2D to 3D anywhere as great as the differential from SD to HD. Further, HD content has become quite pervasive, both over the air and acquired, but this is not the case for 3D.

The motivation of the display panel industry and CE companies with 3D is to sustain the ASP of displays at a time when the market direction of HD display cost is just the opposite. Consumers will have to buy into this premium to create a significant market for 3D displays.

Conclusion

This leads us to the simple conclusion that 3D, at present, is not a compelling visual medium to drive consumers to make a large market for the flat panel and consumer electronics industry.


CES 2011: Day 2

CES Exhibition Floor

As we explore the CES floor the energy around the tablet is palpable. The smartphone is assumed; despite all the uncertainties of the market and the struggles for market share, that looks like an old battlefield. The excitement is around the tablet, which had its last energy peak in 1991 with pen computing. Yet in only 9 months the market has been transformed by the iPad, which was only speculation at CES 2010. Before the event CEA estimated that 80 tablets would be introduced here, and yesterday it hinted at 100. During a panel discussion on tablets, a thread ran about the struggles tablet manufacturers are having with differentiation. There is only one way to characterize this – high market velocity. It was captured neatly by one company already showing a case for the iPad 2.0. As we walk the floor, sample the exhibits, go to sessions and engage with individuals, some of the dynamics around the tablet market have gained additional clarity. But by no means would we claim there is real understanding of this immature market.

CES Overview

We were last at CES in 2009 and it seems like an eternity. Notable changes include:

The exhibit space has shrunk. The Sands is gone. There is less space being used in the Hilton Convention Center.

Some large players are not here: HP and Dell, for example. The single largest missing company is Apple. Another company noticeably missing is Google.

The event is more diverse. CEA has created Tech Zones to highlight segments of the market, even including products for Apple and gamers.

PMA abandoned its February trade show in Las Vegas due to competition with CES, and CES now has greater depth in photo equipment, though Nikon is missing. Still, CES is not a photo equipment show.

There is little here on social networks. If anything, CES is about published media and the products for consumption, not how and what consumers do to participate on networks.

The technical program is also more diverse but the quality is variable. The programs are not managed in the same way they are at many conferences.

In many respects CES has filled the void left by the demise of COMDEX, but at its core CES is about Consumer Electronics and not computing or communications. Yet, as a massive trade show centered on consumers, it remains a premier event, at least in the US.

It has become difficult to sustain these large venues in these economic times. The direction has been and continues to be towards more specialized events on a much smaller scale. Unmanageable size killed COMDEX and it remains to be seen if the size of CES will severely impact this venue. A downward trend is already evident but one must give CEA credit for being adaptable to the market dynamics.

140,000 individuals came to CES. This makes CEA happy but we wonder if this is a good sign for the event.

In walking the floor the first day one is struck by these impressions.

3D displays and even cameras are everywhere. One has a sense of desperation. With increasing penetration of HD television the industry must find a way to continue to flood the market with LCD panels and, if possible, increase or sustain the ASP. 3D is that hope. But the hype can only last so long. If consumers do not grab this technology it will rapidly fade. As one commentator said – this is the last year of 3D hype. If it does not stick this year it may not be around again.

Numerous times we have heard this described as the year of the tablet. Products are everywhere and, more importantly, individuals are carrying them. The space is dominated by Apple but its absence at CES makes the venue a wannabe forum.

Large home appliances are entering CES, in the large booths, as connected devices. But we regard most of this as no more than exploration.

A hot topic is connected TVs and TVs in many forms beyond passive watching. Again this is an attempt to differentiate a commodity product and stimulate sales.

There are really two shows here, not too dissimilar to COMDEX at the end. That is, on one side are the large CE companies with massive booths which drive the show-floor eye candy. But there are many small booths of companies from Hong Kong, China, Taiwan and Korea. It could be said that the start-ups are present here but we find the side-events such as ShowStoppers a better venue to see this aspect of the market.

The keynotes are an important event at CES. This is where the large players make major announcements and show a commitment to the market. As normal, Microsoft had its event the night before the opening of CES and on opening day it was Verizon.

It is not hard to get caught up in the visual spectacle of CES. One cannot help but be impressed by the massive booths which delight the eye. CES is not immune from the same market forces which drive the companies exhibiting here. It is clear that CES has changed in even the last few years, but continued substantial changes will be required if it is to sustain its position as a premier large-scale conference venue.

Verizon Keynote

It was widely expected that Verizon would announce the iPhone on its wireless network at CES. But as the event approached, the speculation rapidly shifted to a focus on Android announcements. The message which resonated with this shift was that Steve Jobs would not let such a significant announcement happen at an event he does not control. Only a week later was the iPhone on Verizon announced in New York City.

Most of the keynote was about the Verizon network and how it will meet the needs of consumers in the future. A lot of time was spent with the Time Warner CEO Jeffrey Bewkes but this was about how Verizon could support the plans of Time Warner.

The significant element of the keynote was the demonstration by Google Android chief Andy Rubin of Honeycomb on the Motorola Xoom. It was fluid and well integrated. The desktop was well done. Maps looked like Google Earth, with 3D buildings appearing as one got close to the ground. Email integration was excellent – Gmail, that is. Google has set a high bar for its entry into the tablet space.

Tablet Frustrations

In spite of the promise of Motorola’s Xoom and the interesting Microsoft tablet-like devices in its booth, one could not handle them. Promises of products in the coming months seemed hollow when the hot items were under glass.

In the Motorola booth we asked about distribution: the Xoom will only be available on the Verizon network. While the Apple iPad has a 50-50 split between WiFi and 3G products, Motorola has elected to segment its market with only one means of distribution. There is hardly an open market for a product when it is buried in a 2-year contract with artificial pricing. Apple is much more transparent. Other companies are less likely to restrict their tablet offerings this way, but connectivity to networks remains an area which will continue to evolve.

Tablets in Context

A conference session, The Great Slate Debate, provided an excellent context for the tablet market dynamics. Here are some of the points made.

Already the tablet market is saturated and vendors are seeking to differentiate their offerings. Freescale stated that 23 tablets were being shown in its suite alone. One panel member asked – how can we stay alive in this market?

Significant production is already underway. Freescale cited one tablet, which will retail at $149 and is currently being produced at 300,000 units/month.

Tablets continue to be differentiated from notebooks and laptops in terms of use. The standard line has been that tablets are for consumption and notebooks are for creation. Yet this notion eroded when tablets with 4 cores were discussed, since these have the compute power of many notebooks. Another characterization of tablets was that they typically are not on-the-go devices. If one views CES as a measure of this, one would certainly regard the notion of tablets being used only in a physically confined environment as false, even though there is no easy way to carry them as a personal device like a phone. There is only one way to respond to all of this: it is impossible to have one view of tablets. This is a new computing category, with new form factors and a wide range of target markets from vertical to horizontal.

On multiple occasions the early and immature character of the tablet market was described. One cannot assume that what the market looks like today will be the same tomorrow; it is changing too rapidly. Tablets are already surfacing in unusual form factors and applications. One cited was a tablet which serves as a television viewing device in the living room and could morph into a large-screen television controller. Tablets are not PCs, and even the notion of the iPad as the defining tablet may be too narrow for this emerging category.

The ability to respond rapidly to the market is, in part, driven by two factors. First, Apple operates on a cadence of 1-year product cycles for its major offerings, which defines the markets for those products. Second, the response to this cadence comes from companies in Asia who are accustomed to high market velocities. They are able to respond to what Apple does and still eke out gross margins within the life cycle of the Apple products.

A major advantage from the supply side is that tablets, including the prospect of many of them in a family, will drive more screens into the home.

Toshiba will start with 10” tablets then expand the offering to 7” and 11” tablets. What OS is employed will be based on the tablet use case.

Barnes & Noble will begin to offer apps on its color Nook device. They find that “traditional” app developers need to be educated on how to deal with a focused vertical market – book readers. The message is that iPad applications are not necessarily best for those focused on reading books.

There is confusion in the OS offerings. Clearly Google dominates due to price and continued innovation. Honeycomb shows how it is driving the market. In spite of rumors of Linux-based offerings only a few have been found, including the Sharp Galapagos which runs a customized version of the Linux OS.

In spite of the “openness” of Android, Google carefully controls how the OS is used in the market. For example, when the OS is updated Google selects the combination of hardware company, OEM and CPU supplier. This is the anointing of the premier hardware platform and launch product, which becomes the first Google Certified device. All following Android devices must then be certified by Google; if a device is not, Google will deny it access to the Google Marketplace.

Given the market hold which Google has on the current tablet market there was some discussion that Microsoft’s offering on ARM could open the market to more competition against Android.


CES 2011: Day 1

Struggles in Mobile Devices

CES 2011 has 28 conferences within the event. These provide an opportunity to explore technologies and topics in more detail. One was the Smart Phone and Tablet Conference. It began on Wednesday, January 5, 2011, one day before CES actually opened, and ran 2 days. This is a report from the first day.

CES 2011, Las Vegas

There were 5 sessions focused mostly on content for smartphones and tablets, with approximately 7 individuals on the panel for each session. Given that the event was organized by Digital Hollywood, the emphasis centered on content. Some of the panels also had members from hardware companies, who seemed largely out of place. Overall the panel structure resulted in a disjointed, poorly organized event. Not a single presentation was given and no panel member spoke individually; the result was just talk and some audience questions. From this, however, some interesting insights surfaced.

Many of the panel members represented content providers, from news to magazines to cable. They are all struggling to adapt to the new environments presented by the iPhone, iPad and Android. Little else, in terms of platforms, was talked about. The diversity of platforms cited was:

  • HTML – especially Flash
  • iOS
  • Android

These platforms make it difficult for developers to create once and target all platforms – in large part it means creating separately for each platform. Given the high development costs, producers are being very cautious about adopting platforms outside of the big 3 above. They all asked for the “old Web” model, where one could develop for one platform and it would play on all the browsers. Many cited that the ability to develop once but target all was essential going forward. To them, it is seen as a technology issue.

Business models are in flux. The premium subscription model was praised but few spoke of success. Over and over the free app was described as the entry point. It is hoped that the forthcoming addition of micro-transactions by Apple will make the free model viable.

Another issue cited was app price elasticity. One producer stated that some copies sold at $4.95, but when the price went to $2.95 sales increased significantly. Now the app is free.

Examples of some of the best content cited were Popular Science and sports news (as a category). It was also asked whether these high-cost content products can have a positive ROI over the long term.

It is impossible to predict a priori what consumers will like and use.

Frequent references were made to how content was surfaced on Facebook and Twitter, picked up by bloggers, and how adoption of the content then accelerated. One reference was to how successful Twitter Parties are; when the audience was asked whether they knew what a Twitter Party is, only a few hands went up. The role that social networks are playing was described as “out-of-channel” distribution and marketing.

This social layer of consumer involvement means close coupling with consumer activities on social platforms. The problem is that consumers drive this space, not the content providers.

Reference was made to a next major step – when applications can communicate between themselves. Another example of a future technology direction was in “gesture workflow” where all functions, including productivity, could be done with just a gesture interface.

On multiple occasions the dilemma of privacy and identity was cited. Complaints were made about the lack of app buyer information from Apple. On the other hand it was stated that once one registers with Google – privacy is gone. It was speculated that issues of privacy and the use of individual information will be a source of tension for the next 5 years.

Developers cited the value of notifications. Apparently only a few apps use them, and the ones that do, as mentioned on the panel, find them a valuable tool, especially for marketing their presence on the platform.

The fragmentation of the Android platform, due to various screen sizes, was stated to be a Google issue while it was just the opposite for Apple.

The control model of Apple was only cited in passing.

iPads were all over the room with the attendees.

Assessment

The environment was surreal. Apple was everywhere but not present. It controls the business existence of many and stands alone on its own terms.

Virtually every presenter was focused on the iPhone and iPad, but the emphasis placed on them was taken for granted. Apple has made all of this happen, the iPad is less than a year old and Apple is absent. The only way one sees Apple at CES is through the impact it has. Apple only plays by its own rules but remains omnipresent in absentia.

The issue was raised several times: what will be the long-term impact of the tablet on the notebook? Most saw the tablet as a consumption device and the notebook as a creation device. One panel member was more forward-looking: the MacBook Air is only a transition device. Watch when the Air has a screen which rotates and lays flat, and that screen has touch. This will bring iPad-like interaction to the notebook; it is then the convergence device. Bingo! Under these conditions Apple has set the bar again on the question – what mobile technology is relevant in the market?

At the same time, the stakes for any existing or new mobile device player without significant market share are clear: it must offer a very high value proposition for developers to support the platform. Just being different only means the ROI is difficult to achieve. Cited to illustrate this, in passing, were Nokia and Microsoft: Nokia is non-existent in the US and Microsoft has no market share. A discussion thread about Nokia and its smartphones was cut off by a moderator as being irrelevant.

Participants remain largely dependent on Apple, are caught in the tornado of iPad adoption, and just feeling their way as to what drives consumer adoption for content and apps. It’s a new world that did not exist a year ago. They are on the outside looking in.


OFDM Tutorial

Frequency division multiplexing (FDM) is a technology that transmits multiple signals simultaneously over a single transmission path, such as a cable or wireless system. Each signal travels within its own unique frequency range (carrier), which is modulated by the data (text, voice, video, etc.).

Orthogonal FDM (OFDM) is a spread-spectrum-like technique that distributes the data over a large number of carriers spaced apart at precise frequencies. This spacing provides the “orthogonality” of the technique, which prevents the demodulators from seeing frequencies other than their own. The benefits of OFDM are high spectral efficiency, resiliency to RF interference, and lower multi-path distortion. This is useful because a typical terrestrial broadcasting scenario is a multipath channel (i.e. the transmitted signal arrives at the receiver over various paths of different lengths). Since multiple versions of the signal interfere with each other (inter-symbol interference, or ISI), it becomes very hard to extract the original information.
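
A minimal sketch of the core mechanism, not any particular standard’s implementation: data symbols are placed on integer-spaced subcarriers with an inverse FFT, and a cyclic prefix is prepended so multipath echoes do not smear one symbol into the next. The subcarrier count and prefix length below are illustrative.

```python
import numpy as np

N_SUBCARRIERS = 64  # number of orthogonal carriers (illustrative)
CP_LEN = 16         # cyclic prefix length, sized to cover the multipath delay spread

def ofdm_modulate(symbols):
    """Map one block of N complex symbols onto N orthogonal subcarriers."""
    # The IFFT places each symbol on its own integer-spaced carrier; over one
    # symbol period these carriers are mutually orthogonal.
    time_signal = np.fft.ifft(symbols)
    # Copying the tail to the front turns multipath convolution into circular
    # convolution, keeping ISI out of the receiver's FFT window.
    return np.concatenate([time_signal[-CP_LEN:], time_signal])

def ofdm_demodulate(rx_block):
    """Strip the cyclic prefix and recover the per-carrier symbols with an FFT."""
    return np.fft.fft(rx_block[CP_LEN:])

# Round trip over an ideal channel: QPSK symbols in, the same symbols out.
bits = np.random.randint(0, 2, (N_SUBCARRIERS, 2))
qpsk = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
recovered = ofdm_demodulate(ofdm_modulate(qpsk))
print("max round-trip error:", np.max(np.abs(recovered - qpsk)))  # ~1e-16
```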

OFDM is sometimes called multi-carrier or discrete multi-tone modulation. It is the modulation technique used for digital TV in Europe, Japan and Australia.

Uses

DAB – OFDM forms the basis for the Digital Audio Broadcasting (DAB) standard in the European market.

ADSL – OFDM forms the basis for the global ADSL (asymmetric digital subscriber line) standard.

Wireless Local Area Networks – development is ongoing for wireless point-to-point and point-to-multipoint configurations using OFDM technology.

In a supplement to the IEEE 802.11 standard, the IEEE 802.11 working group published IEEE 802.11a, which outlines the use of OFDM in the 5-GHz band.

MIMO-OFDM

Multiple Input, Multiple Output Orthogonal Frequency Division Multiplexing is a technology developed by Iospan Wireless that uses multiple antennas to transmit and receive radio signals. MIMO-OFDM will allow service providers to deploy a Broadband Wireless Access (BWA) system that has Non-Line-of-Sight (NLOS) functionality. Specifically, MIMO-OFDM takes advantage of the multipath properties of environments using base station antennas that do not have LOS.

The MIMO system uses multiple antennas to simultaneously transmit data, in small pieces, to the receiver, which can process the data flows and put them back together. This process, called spatial multiplexing, proportionally boosts the data-transmission speed by a factor equal to the number of transmitting antennas. In addition, since all data is transmitted in the same frequency band but with separate spatial signatures, this technique uses spectrum very efficiently.
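
A minimal sketch of the spatial-multiplexing idea, assuming a flat-fading 2x2 channel that the receiver knows perfectly; real MIMO-OFDM systems apply this per subcarrier and use more robust detectors than the simple zero-forcing inverse shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent BPSK streams transmitted simultaneously from two antennas.
tx_streams = rng.choice([-1.0, 1.0], size=(2, 100))

# Flat-fading 2x2 channel matrix; each receive antenna sees a mix of both streams.
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rx = H @ tx_streams

# Zero-forcing receiver: invert the (perfectly known) channel to separate the streams.
separated = np.linalg.inv(H) @ rx
print("both streams recovered:", np.allclose(separated.real, tx_streams))
```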

VOFDM (Vector OFDM) uses the concept of MIMO technology and is also being developed by Cisco Systems.

Other Versions of OFDM

WOFDM – Wideband OFDM, developed by Wi-LAN, uses spacing between channels large enough that any frequency errors between transmitter and receiver have no effect on performance.

Flash OFDM – Flarion (a Lucent/Bell Labs spinoff) developed this technology, also called fast-hopped OFDM, which uses multiple tones and fast hopping to spread signals over a given spectrum band.

The OFDM Alliance Special Interest Group merged with the WiMedia Alliance in 2005.

Additional sources of information*

WiMedia Alliance
OFDM Receivers for Broadband-Transmission, Michael Speth
Spread Spectrum Scene
Telecommunications, Sean Buckley


Video Compression Tutorial

Video Compression Technology

At its most basic level, compression is performed when an input video stream is analyzed and information that is indiscernible to the viewer is discarded. Each remaining event is then assigned a code: commonly occurring events are assigned few bits and rare events are assigned codes with more bits. These steps are commonly called signal analysis, quantization and variable-length encoding, respectively. There are four methods for compression: discrete cosine transform (DCT), vector quantization (VQ), fractal compression, and discrete wavelet transform (DWT).
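
As a sketch of the variable-length-encoding step only, the fragment below builds a Huffman prefix code from made-up symbol counts, so the most common symbol gets the shortest code. Real codecs use standardized code tables rather than building codes on the fly.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix code: frequent symbols get short codes, rare symbols long ones."""
    heap = [[count, i, [sym, ""]] for i, (sym, count) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return dict(heap[0][2:])

# Illustrative quantized coefficients: zero dominates, so it gets the shortest code.
data = [0] * 50 + [1] * 20 + [-1] * 15 + [2] * 10 + [5] * 5
for sym, code in sorted(huffman_codes(data).items(), key=lambda kv: len(kv[1])):
    print(f"symbol {sym:>2}: {code}")
```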

Discrete cosine transform is a lossy compression algorithm that samples an image at regular intervals, analyzes the frequency components present in the sample, and discards those frequencies which do not affect the image as the human eye perceives it. DCT is the basis of standards such as JPEG, MPEG, H.261, and H.263.
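
A minimal numpy sketch of the block-transform-and-quantize idea behind DCT-based coders. The 8x8 block contents and the single quantizer step are made up; real codecs use per-frequency quantization tables and entropy-code the result.

```python
import numpy as np

N = 8  # JPEG/MPEG-style 8x8 blocks

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    mat = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    mat[0, :] = np.sqrt(1.0 / n)
    return mat

C = dct_matrix(N)

# A made-up block: a smooth gradient plus a little noise.
block = np.add.outer(np.arange(N), np.arange(N)) * 8.0 + np.random.randn(N, N)

coeffs = C @ block @ C.T                 # 2D DCT: energy piles up in low frequencies
step = 16.0                              # single illustrative quantizer step size
quantized = np.round(coeffs / step)      # most high-frequency coefficients become 0
restored = C.T @ (quantized * step) @ C  # decoder: dequantize, inverse DCT

print("nonzero coefficients kept:", np.count_nonzero(quantized), "of", N * N)
print("max pixel error after round trip:", np.max(np.abs(restored - block)))
```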

Vector quantization is a lossy compression method that looks at an array of data instead of individual values. It can then generalize what it sees, compressing redundant data while retaining the original intent of the object or data stream.

Fractal compression is a form of VQ and is also a lossy compression. Compression is performed by locating self-similar sections of an image, then using a fractal algorithm to generate the sections.

Like DCT, the discrete wavelet transform mathematically transforms an image into frequency components. The process is performed on the entire image, which differs from the other methods (such as DCT) that work on smaller pieces of the data. The result is a hierarchical representation of an image, where each layer represents a frequency band.
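
A minimal sketch of one decomposition level using the Haar wavelet, the simplest case, showing how a smooth image’s energy concentrates in the coarse (LL) band; production coders such as JPEG 2000 use longer filters and several levels.

```python
import numpy as np

def haar_dwt2(image):
    """One level of a 2D Haar wavelet transform: returns (LL, LH, HL, HH) subbands."""
    a, b = image[:, 0::2], image[:, 1::2]
    lo = (a + b) / np.sqrt(2)   # horizontal low-pass
    hi = (a - b) / np.sqrt(2)   # horizontal high-pass
    def split_rows(x):
        return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)
    ll, lh = split_rows(lo)
    hl, hh = split_rows(hi)
    return ll, lh, hl, hh

# A smooth made-up 8x8 image: nearly all energy lands in the coarse LL band,
# which can be split again for a deeper hierarchy.
img = np.add.outer(np.arange(8.0), np.arange(8.0))
for name, band in zip(("LL", "LH", "HL", "HH"), haar_dwt2(img)):
    print(f"{name} band energy: {np.sum(band ** 2):8.1f}")
```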

Compression Standards

MPEG stands for the Moving Picture Experts Group. MPEG is an ISO/IEC working group, established in 1988 to develop standards for digital audio and video formats. There are five MPEG standards in use or in development. Each compression standard was designed with a specific application and bit rate in mind, although MPEG compression scales well with increased bit rates. They include:

MPEG-1
Designed for up to 1.5 Mbit/sec
Standard for the compression of moving pictures and audio. This was based on CD-ROM video applications, and it is a popular standard for video on the Internet, transmitted as .mpg files. In addition, Layer 3 of MPEG-1 audio is the most popular standard for digital audio compression, known as MP3. MPEG-1 is also the compression standard for VideoCD, the most popular video distribution format throughout much of Asia.

MPEG-2
Designed for between 1.5 and 15 Mbit/sec
Standard on which Digital Television set top boxes and DVD compression is based. It is based on MPEG-1, but designed for the compression and transmission of digital broadcast television. The most significant enhancement from MPEG-1 is its ability to efficiently compress interlaced video. MPEG-2 scales well to HDTV resolution and bit rates, obviating the need for an MPEG-3.

MPEG-4
Standard for multimedia and Web compression. MPEG-4 is based on object-based compression: individual objects within a scene are tracked separately and compressed together to create an MPEG-4 file. This results in very efficient compression that is highly scalable, from low bit rates to very high ones. It also allows developers to control objects independently in a scene, and therefore introduce interactivity.

JPEG stands for Joint Photographic Experts Group. It is also an ISO/IEC working group, but works to build standards for continuous tone image coding. JPEG is a lossy compression technique used for full-color or gray-scale images, by exploiting the fact that the human eye will not notice small color changes.

JPEG 2000 is an initiative that will provide an image coding system using compression techniques based on the use of wavelet technology.

DV is a high-resolution digital video format used with video cameras and camcorders. The standard uses DCT to compress the pixel data and is a form of lossy compression. The resulting video stream is transferred from the recording device via FireWire (IEEE 1394), a high-speed serial bus capable of transferring data up to 50 MB/sec.

H.261 is an ITU standard designed for two-way communication over ISDN lines (video conferencing) and supports data rates which are multiples of 64Kbit/s. The algorithm is based on DCT and can be implemented in hardware or software and uses intraframe and interframe compression. H.261 supports CIF and QCIF resolutions.

H.263 is based on H.261 with enhancements that improve video quality over modems. It supports CIF, QCIF, SQCIF, 4CIF and 16CIF resolutions.

DivX Compression

Terms

Lossy compression – reduces a file by permanently eliminating certain redundant information, so that even when the file is uncompressed, only a part of the original information is still there.

ISO – International Organization for Standardization – a non-governmental organization that works to promote the development of standardization to facilitate the international exchange of goods and services and spur worldwide intellectual, scientific, technological and economic activity.

IEC – International Electrotechnical Commission – the international standards and assessment body for the fields of electrotechnology.

Codec – A video codec is software that can compress a video source (encoding) as well as play compressed video (decompress).

CIF – Common Intermediate Format – a set of standard video formats used in videoconferencing, defined by their resolution. The original CIF is also known as Full CIF (FCIF).

QCIF – Quarter CIF (resolution 176×144)
SQCIF – Sub quarter CIF (resolution 128×96)
4CIF – 4 x CIF (resolution 704×576)
16CIF – 16 x CIF (resolution 1408×1152)

Additional sources of information*

DataCompression.info

* The WAVE Report is not responsible for the content of external websites


Text-to-Speech Tutorial

Technology

Speech synthesis programs convert written input to spoken output by generating synthetic speech. These are often referred to as Text-to-Speech conversions (TTS).

There are several ways to perform speech synthesis:

  1. Record the voice of a person saying the required phrases
  2. The use of algorithms that split speech into smaller pieces, often 35-50 phonemes (the smallest linguistic units). This decreases quality, though, due to the complexity of combining the pieces again into a fluent speech pattern.
  3. The most developed method is the use of diphones, which split phrases not at the transitions but at the centers of the phonemes, leaving the transitions intact. This results in about 400 separate usable elements and a better quality product.

Speech synthesis performed with the methods above is described as concatenative. Concatenative TTS stitches human-quality wave files together to generate speech from a TTS string. These systems can be large and require a lot of drive space to run, but offer a more natural sounding output.
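
A toy sketch of the concatenative idea: pull units from an inventory and join them with a short cross-fade so the splice points stay smooth. The “diphones” here are synthetic tones and the labels are invented, since a real system stores recorded speech segments.

```python
import numpy as np

SR = 16000  # sample rate in Hz (assumed)

def fake_unit(freq_hz, dur_s=0.12):
    """Stand-in for a recorded diphone: a short windowed tone."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)

def concatenate_units(units, overlap_s=0.02):
    """Join units with a short cross-fade at each splice point."""
    overlap = int(SR * overlap_s)
    fade_out = np.linspace(1.0, 0.0, overlap)
    fade_in = 1.0 - fade_out
    out = units[0].copy()
    for unit in units[1:]:
        out[-overlap:] = out[-overlap:] * fade_out + unit[:overlap] * fade_in
        out = np.concatenate([out, unit[overlap:]])
    return out

# Hypothetical unit inventory keyed by diphone label.
inventory = {"s-a": fake_unit(220), "a-n": fake_unit(200), "n-d": fake_unit(180)}
speech = concatenate_units([inventory[d] for d in ("s-a", "a-n", "n-d")])
print("output samples:", speech.size, f"({speech.size / SR:.2f} s)")
```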

Another method, synthesized TTS, creates speech by generating sounds through a digitized speech format. This output sounds more like a computer than a human, but can be run using just a few megabytes of space.

Products, whether concatenative or synthesized, are usually measured by their intelligibility, naturalness and text preprocessing capabilities (the ability to convert acronyms and other raw text into normal speech).
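
A toy sketch of the text preprocessing step: expand abbreviations and spell out digits before synthesis. The expansion table and rules are purely illustrative; a real TTS front end carries far larger tables plus rules for numbers, dates and currency.

```python
import re

# Tiny, made-up expansion table for illustration only.
EXPANSIONS = {"Dr.": "Doctor", "St.": "Street", "TTS": "text to speech"}

def preprocess(text):
    """Expand abbreviations and spell out single digits before synthesis."""
    for short, full in EXPANSIONS.items():
        text = text.replace(short, full)
    ones = ["zero", "one", "two", "three", "four",
            "five", "six", "seven", "eight", "nine"]
    return re.sub(r"\b\d\b", lambda m: ones[int(m.group())], text)

print(preprocess("Dr. Smith lives at 9 Main St. and works on TTS."))
# -> Doctor Smith lives at nine Main Street and works on text to speech.
```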

Additional sources of information*

Report edited by Ronald A. Cole et al. with a section on TTS
Museum of Speech Analysis and Synthesis
Bell Laboratories Projects

* The WAVE Report is not responsible for content on external websites


OLED Tutorial

Organic Light-Emitting Diode (OLED) Technology

An OLED is an electronic device made by placing a series of organic thin films between two conductors. When electrical current is applied, a bright light is emitted; this process is called electroluminescence. Even with the layered structure, these devices are very thin, usually less than 500 nm (0.5 thousandths of a millimeter).

When used to produce displays, OLED technology yields self-luminous displays that do not require backlighting. These properties result in thin, very compact displays. The displays also have a wide viewing angle, up to 160 degrees, and require very little power, operating at only 2-10 volts.

OLED displays have other advantages over LCDs as well:

  • Increased brightness
  • Faster response time for full motion video
  • Lighter weight
  • Greater durability
  • Broader operating temperature ranges

Comparing Technologies

Liquid Crystal Displays (LCDs)

For comparison, LCDs, which are widely used today, are inorganic, non-emissive devices, meaning they do not produce any light themselves. Instead they block or pass light reflected from an external light source or provided by a backlighting system. The backlighting system accounts for about half of the power requirements of an LCD, which is the reason for its higher power consumption compared with OLED technology.

LCD production involves the same sort of layering technique used in OLED displays, with some modification. First, electrodes are formed on two glass substrates. Then the substrates are joined together and the liquid crystals are sealed between them. A backlight provides illumination, spread evenly by a thin light diffuser. Finally the assembly is placed into a metal frame.

Cathode Ray Tubes (CRTs)

Displays made from CRTs use electron tubes in which electrons are accelerated by high-voltage anodes, formed into a beam by focusing electrodes, and projected toward a phosphorescent screen that forms one face of the tube. The electron beam leaves a bright spot wherever it strikes the phosphor screen.

Pros and Cons

CRTs

  • Cost less and produce a display capable of more colors than LCD displays
  • CRTs also use emissive technology, meaning that they can provide their own light – this means you can view images from any angle

LCDs

  • LCDs have gained popularity due to their smaller, lighter form factor and their lower power consumption
  • Many users report lower eyestrain and fatigue due to the fact that LCD displays have no flicker
  • LCDs emit fewer low-frequency electromagnetic emissions than CRTs

Additional sources of information*

Oled-Info.com
Oled-display.net
How Stuff Works – LCDs

Companies Developing OLED Displays
Cambridge Display Technology
Universal Display Corporation
Dupont Displays – Olight


NanoManipulator

What Is Nanotechnology?

Nanotechnology is the science and engineering of the very small—molecular and atomic-scale materials and machine assembly. The name comes from the prefix nano-, meaning one-billionth, and refers to the scale of the objects: one billionth of a meter. Nanotechnology holds huge promise for those who can make it work (and make the economics work), because the ability to control activities and objects at this scale offers tremendous flexibility. An analogy is the difference between moving tennis balls with a bulldozer, or arranging them by hand, one at a time. Nanotechnology promises a by-hand manipulation of the smallest building blocks of materials.

One aspect of nanotechnology development is based on the capabilities of microscopic-sized machines. Proposed medical applications include cell-sized robots operating within the body to remove blockages in the heart, repair degenerative damage to joints, or deliver cancer drugs with cell-by-cell precision. At present this field exists only as pure research, with the greatest achievements being simple gear sets and electric motors.

Another aspect of nanotechnology involves the study and engineering of materials themselves. Examples of this are the “bucky-balls” and “bucky-tubes”—configurations of carbon atoms into ball and tube shapes respectively. Commercial applications of these may make use of their electrical properties, or tremendous tensile strength. Another example might be the creation of custom “designer drugs,” whose physical characteristics are carefully shaped to bond only with certain types of cells.

Sensable Technologies Phantom

The Phantom, from Sensable Technologies, is a device that uses the sense of touch for input or output from a computer. Developed by Thomas Massie and Dr. Kenneth Salisbury at MIT in 1993, the Phantom allows users to experience and manipulate 3D data physically, by moving their hand. The Sensable website describes touch as the only “fully duplex” sense, because a person can send and receive information by touch at the same time. As the user manipulates the data using the articulated arm of the Phantom, they receive feedback in the form of resistance to certain movements. When used with an SPM (scanning probe microscope) as part of the NanoManipulator, the user is able to “feel” the object under study.

More information:

3rd Tech: www.3rdtech.com
Sensable Technologies: www.sensable.com

*The WAVE Report is not responsible for content on external sites


Firewire Tutorial

Firewire, also known as IEEE 1394, is a wired inter-device digital communication standard, providing data rates of up to 400 Mb (megabits) per second. The Firewire standard consists of a serial input/output port and bus, a copper cable capable of carrying both data and power, and the associated software. Its ability to transmit video or audio data in digital form at high speeds, reliably and inexpensively, over cable lengths of up to 14 feet, has made it a popular choice for connecting digital video devices to each other and to computers. The Firewire standard is supported by electronics companies such as Sony, Philips, Panasonic, Canon, and JVC, as well as computer companies such as Apple, Microsoft, Compaq, and Intel, although many of these companies use the IEEE 1394 label for the technology.

Properties

The Firewire/IEEE 1394 standard has the following properties:

  • Consists of both hardware and software specifications
  • Completely digital—no conversion to analog
  • Data rates of 100, 200, or 400 Mb per second
  • Plug and play—connection is automatic once cable is plugged in
  • Hot plug-able—cables can be connected and disconnected while in use
  • Flexible—supports daisy-chain and branching cable configurations
  • Peer-to-peer—can connect digital video recorders (DVRs) to a computer or directly to each other
  • Scalable—can mix 100, 200, or 400 Mb devices on a single bus
  • Physically easy to use—no special terminators or device IDs to set
  • Physically small—thin cables
  • Inexpensive
  • Non-proprietary—licensing is open and inexpensive
  • Two data transfer types—asynchronous and isochronous
    • Asynchronous data transfer—The traditional request-and-acknowledge form of computer communication for sending and receiving data.
    • Isochronous data transfer—A continuous, guaranteed data transmission at a pre-determined rate. This allows the transmission of digital video and audio without expensive buffer memory (see the sketch after this list).
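
A rough budgeting sketch of why isochronous transfer suits digital video: FireWire delivers isochronous data in 125-microsecond bus cycles, and a DV stream needs only a small, fixed slice of each cycle. The DV bit rate below is approximate, and protocol overhead is ignored.

```python
BUS_RATE_MBPS = 400       # S400 FireWire
CYCLES_PER_SECOND = 8000  # one isochronous cycle every 125 microseconds
DV_STREAM_MBPS = 29       # DV video plus audio, roughly (assumed figure)

# Raw capacity per cycle, ignoring packet headers and bus arbitration overhead.
bytes_per_cycle_total = BUS_RATE_MBPS * 1e6 / 8 / CYCLES_PER_SECOND
bytes_per_cycle_dv = DV_STREAM_MBPS * 1e6 / 8 / CYCLES_PER_SECOND

print(f"raw payload per 125 us cycle: {bytes_per_cycle_total:,.0f} bytes")
print(f"DV stream needs per cycle:    {bytes_per_cycle_dv:,.0f} bytes "
      f"({100 * DV_STREAM_MBPS / BUS_RATE_MBPS:.0f}% of the bus)")
```

Because that slice is guaranteed on every cycle, a camera and a computer can stream video with little buffering, which is the point of the isochronous mode.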

History

In the mid-1990s, Apple Computer invented the Firewire bus for local area networking. At the time it provided connection speeds of 100 Mb per second, although speeds of up to 1000 Mb per second were planned for the future. The standard was soon embraced by computer companies such as Intel and Microsoft, who saw the advantage of the Firewire/IEEE 1394 system over the established USB connection standard for applications such as connecting storage and optical drives; Universal Serial Bus (USB) at the time had a connection speed of only 12 Mb per second. As electronics companies began producing digital video cameras, they too looked to the Firewire standard for connectivity, to maintain an all-digital path for signal quality in digital video editing.

In late 1998, Apple, which held the primary IP for Firewire, began charging a licensing fee of $1 per port, so a hard drive with 2 Firewire ports would cost an extra $2 per unit to construct. While a nuisance in the thriving PC industry, the additional fees would have seriously hampered the future of Firewire in the electronics industry, which typically operates on very thin margins. By the end of 1999, however, the standard was operating under a general licensing group, known as 1394LA, that holds the essential patents relating to the Firewire/IEEE 1394 standard in trust. This is similar to the way in which the patents regarding the MPEG video compression standards are licensed. Companies can now license the IEEE 1394 standard for $0.25 per finished unit, regardless of the number of actual 1394 ports in the unit. The term Firewire, however, remains a trademark of Apple.

The position of Firewire has continued to decline as the performance of USB rises. Apple is not installing Firewire in its latest computers.

More information:*
1394 Trade Association
1394LA Patent Portfolio Group

*The WAVE Report is not responsible for content on additional sites
