CREU 2012

CREU project

American Sign Language, or ASL, is the preferred language of people born deaf in the United States. Signers communicate through hand and body movement along with facial expression. It is important to recognize that ASL is not a direct gestural expression of English; it is its own unique language. The Deaf community, a minority group of about 500,000 people across the United States and Canada, views the use of ASL as an integral part of a shared culture in which its members grow, develop, and share similar experiences.

ASL is used by both the totally deaf and the hard of hearing. Although some can read, speak, or read lips well enough to communicate with hearing people, this group is small and consists mainly of those who lost their hearing later in life. Most often an interpreter is needed for a successful exchange between Deaf and hearing people. Unfortunately, interpreters are expensive to hire and are often unavailable on short notice or for short, simple exchanges. For these scenarios we are creating a digital interpreter that uses voice recognition to translate speech into ASL, providing the necessary bridge between the hearing and Deaf communities.

The current process for creating ASL animations is slow and cumbersome. An artist can design words or sentences using custom interfaces with a 3D graphics program; however, viewing the results requires offline rendering, which can take several minutes. This delay makes fine-tuning position and motion a tedious and time-consuming process. Our project will adapt our avatar so that a visually acceptable and accurate real-time preview is possible, eliminating the need for offline rendering as part of the sign or expression generation process. Making the development of signs easier should help produce more accurate sentences and expressions in greater quantity.

Problem 1: What aspects of computer animation cause the greatest amount of delay?

After reviewing the computational costs of a) drawing polygons, b) texturing, including transparency, and c) lighting, we will analyze how these costs apply to our avatar.
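One way to attribute the delay is to time a run of frames with everything enabled and then again with one feature disabled at a time; the difference is roughly the cost of that feature. The sketch below shows the bookkeeping only; render_frame is a hypothetical stand-in for whatever draws the avatar, not a function in our pipeline.

import time

def render_frame(texturing=True, lighting=True):
    """Hypothetical stand-in for one draw of the avatar with the given
    features switched on or off; the real call would go to the renderer."""
    pass

def mean_frame_time(n_frames=200, **features):
    """Average seconds per frame over n_frames with the given features."""
    start = time.perf_counter()
    for _ in range(n_frames):
        render_frame(**features)
    return (time.perf_counter() - start) / n_frames

baseline = mean_frame_time()                      # everything enabled
for feature in ("texturing", "lighting"):
    cost = baseline - mean_frame_time(**{feature: False})
    print(f"{feature}: roughly {cost * 1000:.2f} ms per frame")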

Problem 2: What optimization strategies can we use to speed the real-time preview without degrading the appearance of the avatar?

We will explore several strategies for speeding the real-time preview. One is to leverage level of detail: if the avatar is being viewed at a distance, some fine detail could simply be hidden and not rendered (a sketch of this idea appears below). Another approach is to simplify the texturing. A third possibility is to substitute a shader that obviates the lighting computation. Finally, it may be necessary to create alternate avatars that have a simpler polygon structure.
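As a rough illustration of the level-of-detail idea, the sketch below decides which components to draw based on the avatar's distance from the camera. The component names and distance thresholds here are illustrative placeholders rather than the model's actual parts or measured cutoffs.

# Minimal distance-based level-of-detail selection (illustrative only).
# Component names and cutoff distances are placeholders.
DETAIL_CUTOFFS = {
    "Lashes": 2.0,              # hide beyond 2 units from the camera
    "ToungeTeeth": 3.0,
    "Hair_fine_strands": 5.0,
}

def components_to_draw(all_components, camera_distance):
    """Return the components worth rendering at the given camera distance."""
    visible = []
    for name in all_components:
        cutoff = DETAIL_CUTOFFS.get(name)
        if cutoff is not None and camera_distance > cutoff:
            continue  # fine detail is too far away to matter; skip it
        visible.append(name)
    return visible

print(components_to_draw(["Body", "Hair_fine_strands", "Lashes"], camera_distance=6.0))
# -> ['Body']: both fine-detail parts fall past their cutoffs at this distance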

Problem 3: Which optimization strategies will be most effective? Are there some circumstances where one strategy is preferable to another?

The first evaluation will measure the performance of each optimization strategy. The second will gather feedback on the avatar's appearance: the students will work with other team members, who will test the optimized avatars while making ASL animations. Based on their feedback we will decide on the best tradeoff between the avatar's performance and appearance.
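To make that tradeoff concrete, the sketch below shows one way to line up a frame-rate measurement with the animators' appearance ratings for each strategy. The strategy names, frame rates, and ratings are placeholders for the example, not results.

# Compare optimization strategies on speed and appearance side by side.
# Strategy names, frame rates, and ratings are placeholders, not results.
results = {
    #  strategy:            (frames_per_second, mean appearance rating, 1-5)
    "baseline":             (12.0, 5.0),
    "level_of_detail":      (24.0, 4.5),
    "simplified_texturing": (20.0, 4.0),
    "unlit_shader":         (30.0, 3.5),
}

baseline_fps = results["baseline"][0]
print(f"{'strategy':22s} {'fps':>6s} {'speedup':>8s} {'look':>5s}")
for name, (fps, look) in results.items():
    print(f"{name:22s} {fps:6.1f} {fps / baseline_fps:7.1f}x {look:5.1f}")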


Mid Term Progress

Problem 1: What aspects of computer animation cause the greatest amount of delay?

Polygon count is a large issue in our signing avatar. We wrote a script to determine the number of polygons in each component of the model. We found the initial model has more than 67,000 polygons. The parts of the model with the highest number of polygons are the lashes and the teeth.
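The counting itself is simple once the geometry is accessible per component. As a rough illustration (our actual script ran inside the modeling package rather than on an exported file), the sketch below tallies faces per named object in a Wavefront OBJ export of the model; the file name avatar.obj is hypothetical.

# Count faces per named object/group in a Wavefront .obj export of the model.
from collections import Counter

def faces_per_component(obj_path):
    counts = Counter()
    current = "(unnamed)"
    with open(obj_path) as f:
        for line in f:
            if line.startswith(("o ", "g ")):       # object or group name
                parts = line.split(maxsplit=1)
                current = parts[1].strip() if len(parts) > 1 else "(unnamed)"
            elif line.startswith("f "):             # one polygon face
                counts[current] += 1
    return counts

counts = faces_per_component("avatar.obj")
for name, n in counts.most_common():
    print(f"{name:20s} {n:7d}")
print("total polygons:", sum(counts.values()))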

Reducing polygon count is most effective in components with both a high polygon count and only rigid rotations. Having only rigid rotations reduces the time and difficulty of reskinning after reducing the number of polygons in the component. The Lashes, ToungeTeeth, and Hair meet both criteria.
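Put another way, a component is a good reduction candidate when its polygon count is high and its skinning involves only rigid rotation. The sketch below expresses that selection rule; the counts, flags, and threshold are placeholder values, not measurements from the model.

# Pick reduction candidates: high polygon count AND only rigid rotations,
# so reskinning after reduction stays cheap. Values are placeholders.
components = {
    #  name:        (polygon_count, rigid_rotations_only)
    "Lashes":       (9000, True),
    "ToungeTeeth":  (8000, True),
    "Hair":         (7000, True),
    "Face":         (6000, False),   # deforms, so reskinning would be costly
}

THRESHOLD = 5000
candidates = [name for name, (polys, rigid) in components.items()
              if polys > THRESHOLD and rigid]
print(candidates)   # ['Lashes', 'ToungeTeeth', 'Hair']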

Problem 2: What optimization strategies can we use to speed the real-time preview without degrading the appearance of the avatar?

In our first attempt to reduce polygon count, we applied 3DS Max's ProOptimizer modifier. This met with only limited success: the modifier produced a jagged profile in the hair and shading distortions in the teeth.

Figures: initial hair shape vs. hair after polygon reduction with ProOptimizer; original teeth shape vs. teeth after polygon reduction.

Because the lashes are composed of many separate lines that are then rendered as polygons, the ProOptimizer modifier cannot easily be applied to them.

We are now working on remodeling the lashes. Our plan is to create a single polygonal surface to stand in for the individually modeled lines, then apply an opacity map to give the appearance of individual lashes (a sketch of the idea follows the figures below).

Figures: original lashes vs. current draft of the new lash shape.
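As an illustration of the opacity-map idea, the sketch below uses the Pillow imaging library to paint thin opaque strokes on a transparent background; an image like this could then be assigned as the opacity channel of the lash surface's material. The image size, stroke count, and file name are arbitrary choices for the example, not the map we will actually use.

# Paint a simple lash-like opacity map: thin opaque strokes on a
# transparent background. Dimensions and stroke placement are arbitrary.
from PIL import Image, ImageDraw
import random

W, H = 512, 256
img = Image.new("L", (W, H), 0)           # single channel; 0 = fully transparent
draw = ImageDraw.Draw(img)

random.seed(1)
for i in range(60):                        # 60 individual "lashes"
    x = 8 + i * (W - 16) / 60
    sway = random.uniform(-15, 15)         # each lash leans a little differently
    draw.line([(x, H - 1), (x + sway, H * 0.25)], fill=255, width=2)

img.save("lash_opacity.png")               # plug in as the material's opacity map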

We would also like to remodel the teeth. Because the teeth are not visible very often, we believe a much simpler geometry is possible. Instead of modeling each tooth, we will create two half-cylinders of the correct shape and use texturing and bump mapping to give the appearance of teeth.
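To give a sense of how few polygons this needs, the sketch below generates one half-cylinder shell as triangles and reports its face count; the radius, length, and segment count are arbitrary example values rather than the dimensions we will use.

import math

def half_cylinder(radius=1.0, length=4.0, segments=12):
    """Vertices and triangles for a half-cylinder shell (curved surface only)."""
    verts, faces = [], []
    for i in range(segments + 1):
        theta = math.pi * i / segments             # sweep half a circle
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        verts.append((x, y, 0.0))                  # near rim
        verts.append((x, y, length))               # far rim
    for i in range(segments):
        a, b, c, d = 2 * i, 2 * i + 1, 2 * i + 2, 2 * i + 3
        faces.append((a, c, b))                    # two triangles per segment
        faces.append((b, c, d))
    return verts, faces

verts, faces = half_cylinder()
print(len(verts), "vertices,", len(faces), "triangles")   # 26 vertices, 24 triangles

Two such shells come to a few dozen triangles, compared with the individually modeled teeth, which are among the heaviest parts of the current model.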

Problem 3: Which optimization strategies will be most effective? Are there some circumstances where one strategy is preferable to another?

The quality of the graphics card has a large impact on the speed of playback. We bought two new NVIDIA Quadro graphics cards for computers in our labs and were each able to install one. We will also be downloading the 3DS Max performance driver, which can greatly improve 3DS Max's performance.

Here you can see images of the computers and the new and old graphics cards during the installation process.


Study conducted by: Marie Stumbo and Farah Thomas.

Research supervisors: Dr. Rosalee Wolfe and Dr. John C. McDonald

The CREU project is sponsored by the Computing Research Association Committee on the Status of Women in Computing Research (CRA-W) and the Coalition to Diversify Computing (CDC). Funding for this project is provided by the National Science Foundation.