2012 IEEE International Conference on Multimedia and Expo Workshops

Creative Transformations of Personal Photographs

Yi Wu∗, Kalpana Seshadrinathan∗, Wei Sun∗, Maha El Choubassi†, Joshua Ratcliff∗ and Igor Kozintsev∗
∗ Intel Corporation, 2200 Mission College Blvd, Santa Clara, CA, USA
† Computer Science Department, American University of Beirut, Beirut, Lebanon

Abstract—The popularity of mobile photography paves the way to create new ways of viewing, interacting and enabling a user's creative expression with personal media. In this paper, we describe an instantaneous and automatic method to localize the camera and enable segmentation of foreground objects such as people from an input image, assuming knowledge of the environment in which the image was taken. Camera localization is performed by comparing multiple views of the 3D environment against the uncalibrated input image. Following localization, selected views of the 3D environment are aligned, color-mapped and compared against the input image to segment the foreground content. We demonstrate results using our proposed system in two illustrative applications: a virtual game played between multiple users involving virtual projectiles, and a group shot of multiple people who may not be available at the same time or place, created against a background of their choice.

Keywords: segmentation, camera pose, homography, color mapping, alignment.

I. INTRODUCTION

With rapid improvements in the quality of camera imaging on mobile devices, mobile photography is becoming increasingly popular. Applications such as geo-mapping of personal media, augmented reality and gaming are growing rapidly in consumer markets and typically utilize information from various sensors on the device (GPS, accelerometer, gyroscope, etc.) in addition to the camera. State-of-the-art imaging applications such as geo-mapping pin the thumbnail of a picture on a map. In the area of augmented reality, the main focus has been on augmenting computer graphics generated content to provide additional information and recommendations to the user regarding the scene in front of them [1], [2]. However, there have been relatively fewer advances in new ways of viewing, interacting and enabling a user's creative expression with camera images, and current mobile applications typically display images in a two-dimensional slideshow fashion. Images from mobile cameras can serve as input to richer media applications, where users can express their creativity and imagination. Indeed, a number of fun photo editing applications, such as face morphing, have gained popularity in various app stores. In this paper, we describe an instantaneous and automatic photo editing method to enable segmentation of foreground objects such as people from a camera image, assuming knowledge of the environment in which the image was taken. We believe that this method can enable a number of creative and entertaining uses of camera images. For example, users can create a group photo with family or friends who are not necessarily all at the same location or available at the same time. Users can also create group photos of themselves with celebrities or other objects of interest, or in front of a new location of their choice. Users can create photo-montages, photo-collages or photo time-series from multiple segmented images to enable new representations of their personal media. Users can also create playful and imaginative representations of themselves and their friends by embedding themselves in a virtual world, where they engage in virtual activities such as playing a game.

In this paper, we propose a system that performs segmentation of foreground objects from camera images, which can be used by people or application developers to enable creative expression using personal media. We assume a static background whose 3D model can be acquired in advance offline; a comprehensive discussion of 3D acquisition can be found elsewhere [3], [4], [5], [6]. Here, we focus on automatically localizing camera images taken anywhere within the known 3D environment and extracting foreground objects from the image. Localization is performed by comparing multiple pre-captured views of the 3D environment against the uncalibrated camera image, and our approach does not require the pre-captured views to be taken from the same viewing position and angle as the input image, eliminating the time and effort of manual setup and coordination. Following localization, selected views of the 3D environment are warped to align with the camera image. These images are then color-mapped and compared against the camera image to segment the foreground content, which can then be augmented onto any chosen scene, whether virtually created or captured by cameras. Our method to segment foreground content is fully automatic and unsupervised, assumes only a few pre-captured views of the 3D environment, is robust to viewpoint changes and makes no assumptions regarding the foreground object. Our system works with mobile devices with and without geo-location and orientation sensors. The sensors, when available, help speed up camera localization by constraining the search space. The visual information from captured pictures is used to precisely localize and estimate the pose of the camera. Segmentation of the foreground image is performed purely based on visual processing. We present an overview of our proposed system in Section II, followed by details of our algorithms in Section III. Section IV presents the results and Section V concludes the paper.


Figure 1: System Overview.

II. SYSTEM OVERVIEW

Our system is illustrated in Fig. 1. We assume that the environment is static and its 3D geometry is known a priori. The 3D geometry can be obtained using laser scanning, passive stereo imaging, structured lighting or time-of-flight sensing technology. Many algorithms have been developed for indoor 3D reconstruction [3], [4], and companies such as Navteq and Earthmine have started providing 3D map services with geometry of outdoor scenes [7]. We assume that urban and indoor environments, which typically consist of building facades, ground and ceilings, can be approximated using a set of planar structures [8]. During offline pre-processing, multiple views of each planar structure of the background environment are captured. Textures from these background images are registered to the corresponding planar structures and stored, together with the 3D geometry, on servers in the cloud. At run time, an uncalibrated input image from a mobile device is compared to the stored background geometry and textures by the vision algorithms module to estimate the camera pose and to separate foreground objects from the background environment. The results are then sent to the rendering engine, either on the mobile device or on a different computer, for visualization and user interaction.

III. VISION ALGORITHMS

The various stages of processing undergone by an input image in the vision algorithms module are shown in Fig. 2. The input image from the mobile device first undergoes visual feature extraction. The extracted visual features are used to perform homography estimation, where multiple homographies corresponding to the multiple planar structures in the input image are computed. These homographies are then used to calculate the position and orientation of the camera on the mobile device in the 3D world during camera pose estimation, and in alignment of the planar structures of the input image to those in the background images. Following alignment, color mapping is performed to account for color differences between the input and background images due to varying camera sensors and lighting conditions. Finally, the aligned and color-mapped images are used to perform segmentation. The camera pose estimates and the segmentation masks are the outputs of the vision algorithms and are transmitted to the rendering engine to enable new ways of composing and viewing the images from the mobile device.

Figure 2: Vision Algorithms Overview. (Input image → visual feature extraction → homography estimation → camera pose estimation → alignment → color mapping → segmentation; the pose estimates and segmentation masks are sent to the rendering engine.)

A. Visual Features Extraction

State-of-the-art candidates for visual feature extraction and descriptor generation include SIFT [9] and SURF [10]. Our system uses SURF for its tradeoff between speed and accuracy. Note that visual features can be extracted from the background planar structures offline during pre-processing.

B. Homography Estimation

The visual features extracted from an input image I are matched against features from all available background images [9]. The search space of background images can be narrowed if GPS or compass information is available from the mobile device, speeding up the matching process. If the number of matching points between I and a background image is above a pre-defined threshold, the background image is considered a matching candidate. As each background image is associated with a planar structure in the environment, merging all the matching background candidates gives us the set of m planar structures contained in I. For each planar structure $I_i$ (i = 1...m) in I, the matching background candidate with the highest number of matching points is chosen as the corresponding background image $B_i$. For each resulting pair $I_i$ and $B_i$, we use RANSAC to compute a homography $H_i$ that maps the matched features from $B_i$ to $I_i$ using their spatial correspondence [9]. For a planar structure, the homography $H_i$ is a non-singular 3 × 3 matrix that maps a 2D point $x_{B_i}$ in $B_i$ to its corresponding 2D point $x_{I_i}$ in $I_i$ [11]: $x_{I_i} = H_i \cdot x_{B_i}$.
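To make this step concrete, the following is a minimal Python sketch of SURF matching with a ratio test followed by RANSAC homography estimation in OpenCV. The function name, the ratio-test constant and the thresholds are illustrative choices, not part of the paper; SURF requires an OpenCV build with the contrib xfeatures2d module (SIFT is a drop-in alternative).

```python
import cv2
import numpy as np

def estimate_homography(input_img, background_img, min_matches=20):
    # Detect SURF keypoints and descriptors (needs opencv-contrib with
    # the nonfree xfeatures2d module; cv2.SIFT_create() also works).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_i, des_i = surf.detectAndCompute(input_img, None)
    kp_b, des_b = surf.detectAndCompute(background_img, None)

    # Match descriptors and keep good matches via Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_b, des_i, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    if len(good) < min_matches:
        return None  # background image is not a matching candidate

    # RANSAC homography H_i mapping background points to input points.
    pts_b = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_i = np.float32([kp_i[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(pts_b, pts_i, cv2.RANSAC, 3.0)
    return H
```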

C. Camera Pose Estimation

As the 3D geometry and the image textures of the planar structures in the environment are known, a projective transformation $P_{B_i}$ that maps any 3D point $X_i$ on a planar structure of the environment to its corresponding 2D point $x_{B_i}$ in a background image can be pre-computed offline using $x_{B_i} = P_{B_i} \cdot X_i$. We then have

$$x_{I_i} = H_i \cdot P_{B_i} \cdot X_i = P_{I_i} \cdot X_i. \quad (1)$$

Therefore, we can calculate a projective transformation $P_{I_i} = H_i \cdot P_{B_i}$ that maps a 3D point $X_i$ to its corresponding 2D point $x_{I_i}$ in the input image. Once $P_{I_i}$ is known, we can compute any $x_{I_i}$ from the corresponding $X_i$ using Eq. 1. A projective transformation can be decomposed as $P_{I_i} = K[R|T]$ [11]. Here, K is a 3 × 3 matrix determined by the intrinsic parameters of the camera, such as the focal length and optical center, which can be obtained either from the camera specifications or via a standard camera calibration routine [12]. R and T are the camera pose parameters representing a rotation and a translation with respect to a pre-defined 3D reference coordinate system. Hence, we can rewrite Eq. 1 as $x_{I_i} = K[R|T]\,X_i$. Given pre-measured or computed $X_i$, $x_{I_i}$, and K, a least-squares minimization can be employed to compute R and T, which specify the position and orientation of the camera during capture.
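As an illustration of recovering R and T from 3D-2D correspondences and a known K, here is a short sketch using OpenCV's solvePnP, whose iterative mode performs a least-squares (Levenberg-Marquardt) minimization of reprojection error, in the spirit of the minimization described above. The function name and the assumption that lens distortion is already corrected are our additions.

```python
import cv2
import numpy as np

def estimate_camera_pose(points_3d, points_2d, K):
    # points_3d: Nx3 array of 3D points X_i on the planar structures.
    # points_2d: Nx2 array of their projections x_Ii in the input image.
    # K: 3x3 intrinsic matrix, e.g. from Zhang's calibration method [12].
    dist = np.zeros(5)  # assume lens distortion is already corrected
    ok, rvec, tvec = cv2.solvePnP(
        np.float32(points_3d), np.float32(points_2d), K, dist,
        flags=cv2.SOLVEPNP_ITERATIVE)  # iterative least-squares refinement
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec               # camera pose [R|T]
```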

Figure 3: Alignment. (a) Input image I. (b) Matched background $B_i$. (c) Aligned background $B_i'$. (d) Projection mask $M_i^P$.

D. Alignment

Given the homography $H_i$ estimated previously, the background image texture $B_i$ of each planar structure can be projected onto the planar structure in the input image to generate an aligned background texture $B_i'$, via $x_{B_i'} = H_i \cdot x_{B_i}$. Fig. 3 illustrates the alignment process. Fig. 3a shows the input image I from the camera. Fig. 3b shows the best matched background texture $B_i$ for one planar structure in I. Fig. 3c shows the aligned background texture $B_i'$, and Fig. 3d shows the binary projection mask $M_i^P$ that represents the area occupied by the i-th planar structure in the input image.
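A small sketch of this step under the same assumptions: OpenCV's warpPerspective applies $H_i$ to the background texture, and warping an all-ones image yields the projection mask $M_i^P$. The helper name align_background is illustrative.

```python
import cv2
import numpy as np

def align_background(background_img, H, out_shape):
    # Warp the background texture B_i into the input image frame using
    # the homography H_i, producing the aligned texture B_i'.
    h, w = out_shape[:2]
    aligned = cv2.warpPerspective(background_img, H, (w, h))
    # Projection mask M_i^P: the area covered by this planar structure
    # in the input image, obtained by warping an all-ones image.
    ones = np.full(background_img.shape[:2], 255, np.uint8)
    mask = cv2.warpPerspective(ones, H, (w, h))
    return aligned, mask
```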

Figure 4: Color mapping. (a) Adjusted projection mask $M_i^{P'}$. (b) Masked input image $I_i$. (c) Masked background $B_i'$. (d) Colormapped background $B_i''$.

E. Color Mapping

Ideally, the foreground within $I_i$ could be extracted by differencing the input image $I_i$ and its aligned background $B_i'$ [13]. However, since the input and background images may be captured using different cameras, under different lighting conditions and with different camera poses, color mapping is needed to adjust the colors in $B_i'$ to match those of $I_i$. To improve color mapping accuracy, we first eliminate the influence of the foreground object, which appears only in the input image. We compute the difference between $I_i$ and $B_i'$ within the projection mask $M_i^P$, apply a 50% threshold to the resulting difference image to coarsely extract the foreground, and remove it from $M_i^P$ to generate an adjusted projection mask $M_i^{P'}$, as in Fig. 4a. We then estimate color mappings from $B_i'$ to $I_i$ within the adjusted projection mask $M_i^{P'}$. Fig. 4b and Fig. 4c show the pixels within $M_i^{P'}$ for the input image $I_i$ and the background texture $B_i'$ respectively. A histogram of all the intensity values in $I_i$ that correspond to each intensity value in $B_i'$ is computed for each of the three color channels. The color mapping function for each channel is then estimated by enforcing this mapping to be monotonic. Finally, the color mappings are applied to $B_i'$ as lookup tables to create the aligned and colormapped background image texture $B_i''$, as shown in Fig. 4d. To ensure robustness of the color mapping to noise, a Gaussian low pass filter is applied to both $I_i$ and $B_i'$ before performing color mapping.
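A rough sketch of per-channel lookup-table color mapping follows. The paper estimates each channel's mapping from a histogram of corresponding intensities; this sketch substitutes the per-intensity median as a simple stand-in for that histogram-based estimate, and enforces monotonicity with a running maximum. Function names and parameter values are our assumptions.

```python
import cv2
import numpy as np

def estimate_color_mapping(inp, bg, mask):
    # Estimate per-channel lookup tables mapping background intensities
    # in B_i' to input intensities in I_i, inside the adjusted mask M_i^P'.
    # Low-pass filter both uint8 images first, for robustness to noise.
    inp = cv2.GaussianBlur(inp, (5, 5), 0)
    bg = cv2.GaussianBlur(bg, (5, 5), 0)
    luts = []
    for c in range(3):
        lut = np.arange(256, dtype=np.float64)  # unseen values: identity
        src = bg[..., c][mask > 0]
        dst = inp[..., c][mask > 0]
        for v in np.unique(src):
            # Median as a simple stand-in for the histogram-based estimate.
            lut[v] = np.median(dst[src == v])
        lut = np.maximum.accumulate(lut)  # enforce a monotonic mapping
        luts.append(lut.astype(np.uint8))
    return luts

def apply_color_mapping(bg, luts):
    # Apply the lookup tables to B_i' to obtain the colormapped B_i''.
    return cv2.merge([cv2.LUT(bg[..., c], luts[c]) for c in range(3)])
```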


F. Segmentation

Our segmentation algorithm combines color information with binary image processing techniques to compute a foreground mask for the input image. The algorithm consists of three main steps: dual-thresholding to produce a binary foreground mask, mask refinement using statistical analysis of the input and background images, and mask post-processing to produce the final segmentation mask.

We compute differences between each planar structure in the input image $I_i$ and its aligned and colormapped background texture $B_i''$, and merge the differencing results for all the planar structures contained in the input image to create a difference map $D = \bigcup_{i=1}^{m} |I_i - B_i''|$. Segmentation by thresholding is very sensitive to the threshold value, resulting in a trade-off between retaining excessive background pixels and removing foreground pixels. We use dual-thresholding [14] on D to overcome these drawbacks. The basic principle of dual-thresholding is to first threshold D with a high and a low threshold to produce two sets of segmented binary regions $\Gamma_h$ and $\Gamma_l$. The binary foreground mask $M^F$ is then created by selecting those regions in $\Gamma_l$ that overlap $\Gamma_h$:

$$M^F = \{\gamma_l \in \Gamma_l \mid \exists\, \gamma_h \in \Gamma_h : \gamma_l \cap \gamma_h \neq \emptyset\}, \quad (2)$$

where $\gamma_l$ and $\gamma_h$ represent segmented binary regions in $\Gamma_l$ and $\Gamma_h$ respectively. An example result is shown in Fig. 5a.
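A compact sketch of dual-thresholding in the sense of Eq. (2), using connected components to realize the region overlap test. The function name and the choice to leave the two threshold values to the caller are our assumptions; the paper does not specify them.

```python
import cv2
import numpy as np

def dual_threshold(D, t_low, t_high):
    # Keep a low-threshold region (Gamma_l) only if it overlaps at least
    # one pixel that also passes the high threshold (Gamma_h), as in Eq. (2).
    low = (D > t_low).astype(np.uint8)
    high = D > t_high
    n, labels = cv2.connectedComponents(low)  # regions of Gamma_l
    mask = np.zeros_like(low)
    for label in range(1, n):
        region = labels == label
        if high[region].any():  # region overlaps some region of Gamma_h
            mask[region] = 255
    return mask
```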

Within each adjusted projection mask $M_i^{P'}$ of the input image, we perform mask refinement on the foreground mask $M_i^F$, based on statistical analysis of the input and background images, to generate a refined mask $M_i^{F'}$. Mask refinement is based on the intuition that foreground regions tend to be localized and well connected, so pixels neighboring known foreground pixels are likely to be foreground pixels as well. We analyze the statistics of local areas surrounding known foreground pixels in both the input $I_i$ and the aligned, colormapped background image $B_i''$. Background pixels in a local area are added to the foreground mask when their statistics differ significantly between the input and background images. Similar methods have been applied to alpha-matting and segmentation problems in the literature [15]. Our method differs from Bayesian alpha matting in two ways. First, we propose a novel statistical criterion for mask refinement when both input and background images are available. Second, our method is very fast, as it does not involve iterative optimization at each pixel and uses simple statistical models.

Consider a local window of size n × n at a pixel in the mask $M_i^F$ that is labeled foreground. Let s represent the window-sized neighborhood in $I_i$ and let t represent the corresponding neighborhood in $B_i''$. Let $N(\mu, \sigma^2)$ denote the normal distribution with mean $\mu$ and variance $\sigma^2$. We model the pixels in s and t using normal distributions $N(\mu_s, \sigma_s^2)$ and $N(\mu_t, \sigma_t^2)$ respectively; $\mu_s, \sigma_s$ and $\mu_t, \sigma_t$ can be estimated from s and t using maximum likelihood estimation. Consider any pixel in s that is labeled background in $M_i^F$ and denote it $s_j$, with the corresponding pixel from t denoted $t_j$. We define two measures of statistical dispersion at this pixel location: $d_s$, which assumes that $s_j$ and $t_j$ arise from the input image distribution, and $d_t$, which assumes that they arise from the background image distribution. When the input and background statistics are similar, which occurs in background regions, both measures of dispersion are expected to be small; both are expected to be large in regions where the input image contains foreground. We define the measures of dispersion as the probability of the range of values between the foreground and background pixels:

$$d_s = \left| \mathrm{CDF}_{N(\mu_s, \sigma_s^2)}[s_j] - \mathrm{CDF}_{N(\mu_s, \sigma_s^2)}[t_j] \right| \quad (3)$$
$$d_t = \left| \mathrm{CDF}_{N(\mu_t, \sigma_t^2)}[s_j] - \mathrm{CDF}_{N(\mu_t, \sigma_t^2)}[t_j] \right| \quad (4)$$

where $\mathrm{CDF}_{N(\mu, \sigma^2)}$ denotes the cumulative distribution function of the normal distribution $N(\mu, \sigma^2)$ and $|\cdot|$ denotes the absolute value. For full color images, the same model is applied separately to all three color channels and dispersion is computed as the maximum of the individual dispersions over the three channels. When both $d_t$ and $d_s$ exceed a threshold, the pixel location of $s_j$ is labeled foreground in the refined mask $M_i^{F'}$. Since the dispersion measure is defined as a probability, it lies between 0 and 1; we used a threshold of 0.5. We found the performance of this method to be superior to simple likelihood ratio tests between corresponding pixels from the input $I_i$ and background $B_i''$, since the likelihood ratio is more sensitive to bias in the estimation of local probability distributions from the input and background images. We repeat the refinement process on pixels newly added to the foreground mask until a limit on the number of added pixels is reached. A refined foreground mask $M_i^{F'}$ is shown in Fig. 5b.

In the final step, we post-process the foreground masks using boundary artifact removal and connected components analysis (CCA). First, the refined binary submasks $M_i^{F'}$ are merged to obtain a single foreground mask $M^{F'}$. We then apply a median filter to this mask to remove boundary artifacts between planar segments in the input image caused by merging. Next, CCA is performed on this mask to obtain all the components in the input image [16]. Thresholding based on the area of each component filters out small components; the remaining components are filled to generate a better connected foreground mask. A final mask is shown in Fig. 5c, with the segmented image in Fig. 5d. The segmented image is then output to the rendering engine for creative visualizations.

Figure 5: Segmentation. (a) Dual-thresholded mask $M_i^F$. (b) Refined mask $M_i^{F'}$. (c) Final mask. (d) Segmented result.
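To illustrate the refinement criterion of Eqs. (3) and (4), here is a per-pixel sketch, written for grayscale for brevity; the paper applies the model per color channel and takes the maximum dispersion. The function name and the small variance floor are our additions.

```python
import numpy as np
from scipy.stats import norm

def refine_pixel(s, t, s_j, t_j, thresh=0.5):
    # s, t: n x n neighborhoods around a known foreground pixel in the
    # input image I_i and the aligned, colormapped background B_i''.
    # s_j, t_j: a background-labeled pixel in s and its counterpart in t.
    mu_s, sigma_s = s.mean(), s.std() + 1e-6  # ML estimates, variance floor
    mu_t, sigma_t = t.mean(), t.std() + 1e-6
    # Dispersion: probability mass between the two pixel values under each
    # image's local normal model, per Eqs. (3) and (4).
    d_s = abs(norm.cdf(s_j, mu_s, sigma_s) - norm.cdf(t_j, mu_s, sigma_s))
    d_t = abs(norm.cdf(s_j, mu_t, sigma_t) - norm.cdf(t_j, mu_t, sigma_t))
    # Relabel as foreground when both dispersions exceed the threshold.
    return d_s > thresh and d_t > thresh
```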

IV. RESULTS

We demonstrated our system with real user content at the Intel Developer Forum, and the response to the automatic photo editing and rendering system was overwhelmingly positive. The 3D environment of our event center was created beforehand. The entire space is represented by multiple planar structures, and we captured four images from different view angles for each planar structure. During the event, users were asked to take photos of themselves or their friends and colleagues using mobile devices that were handed out to them. The captured photos were sent via a wireless network to a server, where the camera pose was estimated and foreground objects such as people were segmented. Users were then able to view their personal media in the two representations outlined below, which showcase the usages our system can enable.

Virtual game: The purpose of this application is to enable users to create playful representations from captured photos, in which they play a role-playing game with themselves or other people. We asked three users to pose for a succession of five camera shots each, as though each was acting a role: one user posed as if stepping on something to launch a projectile, a second posed as if shielding himself from attack, and the third posed as if being hit by a projectile launched by the first user. The segmented foreground user images from our system were then overlaid in a 3D environment and rendered as a stop motion animation, along with an animated sequence of flying projectiles, such as birds. This creates a visualization of one user attacking other users (his friends or even himself!) with a basket of birds. Fig. 6a shows one frame from the rendered virtual game; the full video sequence is submitted as supplemental material.

Virtual group shot: This application enables users to create a group photo with friends who are not available at the same time or place. Examples include a class reunion photo, or a collage created from multiple photos of different people against a popular landmark that they visited at different times. Fig. 6b shows an example of a virtual group shot created using our system at the public event (for privacy reasons, we blurred faces in the figure). Users captured photos against the 3D environment at different time instants. The images of people segmented by our system were placed in the 3D environment at the location where each photo was taken, based on the camera pose our system estimated.

Figure 6: (a) Virtual game. (b) Virtual group shot.

It is clear from the images in Fig. 6 that the segmentation results are not perfect: there are problems where colors on people's clothing match colors in the background image. However, during the event, we found that a majority of the users did not mind these minor segmentation issues and enjoyed playing the virtual game. This was slightly surprising, and we believe there is room for imperfection in technology, especially in the personal entertainment space, as long as the usage of the application is compelling.

Table I shows the runtime of each component of the vision algorithms on a Core i7 desktop at 3.33 GHz. The overall runtime is around 1.6 s. The fast processing speed of our proposed method makes interactive applications, such as the virtual game, possible. Most of the runtime was spent on matching the input image to the known 3D environment; with a more efficient indexing mechanism, we could potentially speed up the overall process further.

Table I: Performance
Vision Algorithms             Runtime (s)
Visual feature extraction     0.28
Homography estimation         1.11
Camera pose estimation        0.0017
Alignment                     0.093
Color mapping                 0.025
Segmentation                  0.24

V. CONCLUSIONS AND FUTURE WORK

In this paper, we presented a system to segment foreground objects from a known 3D environment that is represented as a composition of planar surfaces containing textures. In our system, an input image acquired from a mobile device is used, with the help of known images of the background, to estimate the location and orientation of the camera when the image was taken. The camera pose and the known background images are then used to align, colormap and segment the foreground object from the image. Our segmentation method, unlike blue screen matting, handles textured backgrounds and is fully automatic and unsupervised, whereas commercial tools such as Photoshop color replacement and Quick Select require significant user interaction to generate accurate segmentation results. We demonstrated two applications of the system. In one application, our system renders an interactive and fun game played by multiple people in a virtual world. In another, multiple users who are not available at the same time or place can take a group photo together. We believe that these are just the tip of the iceberg in terms of enabling users' creative expression and developing fun and interactive applications using personal media.

Our segmentation results are, admittedly, not perfect. For example, better color mapping could be designed to alleviate the system's sensitivity to differences in cameras and lighting. A more fundamental limitation is that our segmentation method relies primarily on color information, which cannot distinguish input and background regions with matching colors. In the future, we would like to consider other cues, such as texture consistency, especially within the statistical refinement framework, to improve segmentation. We would also like to explore a hierarchical segmentation approach in which higher level algorithms, such as normalized cuts, utilize the low level color attributes to improve segmentation accuracy [17], [18].

REFERENCES

[1] Layar. [Online]. Available: http://www.layar.com
[2] Wikitude. [Online]. Available: http://www.wikitude.org
[3] S. Izadi, R. Newcombe, D. Kim, O. Hilliges, D. Molyneaux, S. Hodges, P. Kohli, A. Davison, and A. Fitzgibbon, "KinectFusion: Real-time dynamic 3D surface reconstruction and interaction," in ACM SIGGRAPH, 2011.
[4] S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, "A comparison and evaluation of multi-view stereo reconstruction algorithms," in IEEE Computer Vision and Pattern Recognition, 2006.
[5] Y. Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt, "3D shape scanning with a time-of-flight camera," in IEEE Computer Vision and Pattern Recognition, 2010.
[6] F. Amzajerdian, D. F. Pierrottet, L. B. Petway, G. D. Hines, and V. E. Roback, "Lidar systems for precision navigation and safe landing on planetary bodies," Imaging, 2011.
[7] G. Baatz, K. Koser, D. Chen, R. Grzeszczuk, and M. Pollefeys, "Handling urban location recognition as a 2D homothetic problem," in European Conference on Computer Vision (ECCV), 2010.
[8] A. Flint, D. Murray, and I. Reid, "Manhattan scene understanding using monocular, stereo, and 3D features," in International Conference on Computer Vision (ICCV), 2011.
[9] D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[10] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," Lecture Notes in Computer Science, vol. 3951, p. 404, 2006.
[11] V. Lepetit and P. Fua, "Monocular model-based 3D tracking of rigid objects: A survey," Foundations and Trends in Computer Graphics and Vision, pp. 1–89, 2005.
[12] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, 2000.
[13] Y. Ivanov, A. Bobick, and J. Liu, "Fast lighting independent background subtraction," International Journal of Computer Vision, vol. 37, no. 2, pp. 199–207, 2000.
[14] W. Sun and S. P. Spackman, "Multi-object segmentation by stereo mismatch," Machine Vision and Applications, vol. 20, no. 6, pp. 339–352, 2008.
[15] Y.-Y. Chuang, B. Curless, D. H. Salesin, and R. Szeliski, "A Bayesian approach to digital matting," in IEEE Computer Vision and Pattern Recognition, 2001.
[16] A. Rosenfeld and J. L. Pfaltz, "Sequential operations in digital picture processing," J. ACM, vol. 13, no. 4, pp. 471–494, 1966.
[17] J. Wang and M. F. Cohen, "Image and video matting: A survey," Found. Trends Comput. Graph. Vis., vol. 3, no. 2, pp. 97–175, 2007.
[18] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 888–905, 2000.
