Program
Time Speakers Theme
08:40-09:20         Registration
09:20-09:40 Baoquan Chen Opening and Welcome Speech
09:40-10:05 Dani Lischinski Content-Aware Automatic Photo Enhancement
10:05-10:30 Ligang Liu Geometry Processing in 3D Printing
10:30-10:45         Coffee Break
10:45-11:10 Ying He Pre-computing Techniques for the Discrete Geodesic Problem
11:10-11:35 Yebin Liu Capture Close Interacting Motions
11:35-12:00 Yongliang Yang Urban Pattern: Layout Design by Hierarchical Domain Splitting
12:00-14:00         Lunch Break
14:00-14:25 Yingqing Xu Aesthetics Orientated Programmable Camera
14:25-14:50 Yangyan Li Analyzing Growing Plants from 4D Point Cloud Data
14:50-15:15 Ping Tan A Global Linear Method for Camera Pose Registration
15:15-15:40 Jin Huang Singularity Structure Generation and Fixing in Hexahedral Remeshing
15:40-16:00         Coffee Break
16:00-16:25 Jue Wang Making Image Deblurring Practical
16:25-16:50 Hao Li 3D Human Capture for Everyone
16:50-17:15 Qixing Huang Cycle-consistent Regularization via Constrained Low-Rank Matrix Recovery
17:15-17:40 Johannes Kopf Scaling Tiny Images
17:40-18:05 Hui Huang Mind the Gap: Tele-Registration for Structure-Driven Image Completion
Speakers
Speaker Bio Abstract

 

Baoquan Chen

Baoquan Chen is a Professor at Shandong University, where he is also the Dean of the School of Computer Science and Technology. Prior to his current post, he was the founding director of the Visual Computing Research Center at SIAT (2008-2013). His research interests generally lie in computer graphics, visualization, and human-computer interaction. Chen received an MS in EE from Tsinghua University (1994) and a PhD in Computer Science from SUNY Stony Brook (1999). He is the recipient of the 2002 Microsoft Innovation Excellence Program, the 2003 NSF CAREER award, the 2004 McKnight Land-Grant Professorship at the University of Minnesota, the 2005 IEEE Visualization Best Paper Award, and, most recently, the 2010 NSFC "Outstanding Young Researcher" program.


Opening and Welcome Speech

 

Dani Lischinski

Dani Lischinski is a Professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, Israel. He received his PhD from the Department of Computer Science and the Program of Computer Graphics at Cornell University in 1994, and then was a post-doctoral research associate at the Department of Computer Science and Engineering at the University of Washington until 1996. In 2002/3, he spent a sabbatical year at Pixar Animation Studios, and has spent several summers at Microsoft Research in Redmond. In 2012 he received the Eurographics Outstanding Technical Contributions Award.

Content-Aware Automatic Photo Enhancement

 

Abstract:

Automatic photo enhancement is one of the longstanding goals in image processing and computational photography. While a variety of methods have been proposed for manipulating tone and color, most automatic methods used in practice operate on the entire image without attempting to take its content into account. In this work we present a new framework for automatic photo enhancement that attempts to take local and global image semantics into account. Specifically, our content-aware scheme attempts to detect and enhance the appearance of human faces, blue skies with or without clouds, and underexposed salient regions. A user study demonstrates the effectiveness of the proposed approach compared to existing auto-enhancement tools.

 

Ligang Liu

Ligang Liu is a professor at the University of Science and Technology of China. He received his B.Sc. in applied mathematics (1996) and his Ph.D. in computer aided geometric design and computer graphics (2001) from Zhejiang University, China. Between 2001 and 2004, he worked at Microsoft Research Asia as an associate researcher. He then worked at Zhejiang University as an associate professor and professor between 2004 and 2012, and was an academic visitor at Harvard University from 2009 to 2011. His research interests include digital geometric processing, computer graphics, and image processing.

Geometry Processing in 3D Printing

 

Abstract:

3D printing is an emerging manufacturing technology: a process of making a three-dimensional solid object of virtually any shape from a digital model. It is achieved using an additive process, in which successive layers of material are laid down in different shapes, and has been used for both prototyping and distributed manufacturing, with applications in various fields. From the viewpoint of geometry processing, the creation and processing of 3D models is a critically important part of this technology. In this talk, I will introduce our recent work on geometry processing for 3D printing.

 

Ying He

Ying He received his BS and MS degrees in Electrical Engineering from Tsinghua University and his PhD in Computer Science from Stony Brook University. He is currently an Associate Professor at the School of Computer Engineering, Nanyang Technological University, Singapore. His research interests fall into the general area of visual computing. He is particularly interested in problems that require geometric analysis and computation.

Pre-computing Techniques for the Discrete Geodesic Problem

 

Abstract:

Computing geodesics on polygonal meshes is a fundamental problem in computer graphics and geometric modeling. However, the state-of-the-art methods (such as the MMP and ICH algorithms) are computationally expensive and cannot be used for time-critical applications. In this talk, I will introduce two pre-computing techniques, geodesic triangle unfolding (GTU) and the saddle vertex graph (SVG), which allow us to compute geodesics on large-scale models efficiently.
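
To give a flavor of why such pre-computation pays off, here is a minimal sketch (not the speaker's code): assume an offline stage has already produced a sparse graph whose edge weights are exact geodesic lengths between selected vertices, in the spirit of the SVG idea; a distance query then reduces to an ordinary shortest-path search on that graph. The tiny graph below is purely hypothetical.

import heapq

def geodesic_query(svg, source):
    """Dijkstra over the precomputed graph; returns the distance to every reachable vertex."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in svg.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical precomputed graph: vertex -> list of (neighbor, geodesic length).
svg = {
    0: [(1, 1.2), (2, 2.5)],
    1: [(0, 1.2), (2, 0.9)],
    2: [(0, 2.5), (1, 0.9)],
}
print(geodesic_query(svg, 0))  # -> {0: 0.0, 1: 1.2, 2: 2.1}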

 

Yebin Liu

Yebin Liu received the BE degree from Beijing University of Posts and Telecommunications, China, in 2002, and the PhD degree from the Automation Department, Tsinghua University, Beijing, China, in 2009. In 2010 he worked as a research fellow in the computer graphics group of the Max Planck Institute for Informatics, Germany. He is currently an associate professor at Tsinghua University. His research areas include computer vision and computer graphics.

Capture Close Interacting Motions

 

Abstract:

Close interacting motion is common in everyday life and important in graphics animation, user interaction, sports analysis, biomechanics, and so on. Even for marker-based motion capture, such motions are challenging to acquire because of severe occlusion, possible collisions, and the need to capture subtle contact phenomena. In this talk, we will address two typical cases: human body interacting with human body, and hand interacting with object. We will show how "Analysis by Synthesis" can be adapted to solve these two problems.

 

Yangyan Li

Yangyan Li is a postdoctoral scholar in the Geometric Computation Group at Stanford University. Before that, he received his PhD degree from the Shenzhen Institutes of Advanced Technology in 2013, and his bachelor's degree from Sichuan University in 2008. His primary research interests fall in the field of Computer Graphics and Computer Vision, with an emphasis on point cloud processing.

Analyzing Growing Plants from 4D Point Cloud Data

 

Abstract:

Studying growth and development of plants is of central importance in botany. Current methods for quantitative analysis of such growth processes are either limited to tedious and sparse manual measurements, or to coarse image-based 2D measurements. The availability of cheap and portable 3D acquisition devices has the potential to automate this process and easily provide scientists with volumes of accurate data, at a scale far beyond the reach of existing methods. However, during their development, plants grow new parts (e.g., vegetative buds) and bifurcate into different components, violating the central incompressibility assumption made by existing acquisition algorithms, which makes these algorithms unsuited for analyzing growth. We introduce a framework to study plant growth, particularly focusing on accurate localization and tracking of topological events like budding and bifurcation. This is achieved by a novel forward-backward analysis, wherein we track robustly detected plant components back in time to ensure correct spatio-temporal event detection using a locally adapting threshold. We evaluate our approach on several groups of time-lapse scans, often spanning days to weeks, on a diverse set of plant species, and use the results to animate static virtual plants or directly attach them to physical simulators.

 

Yingqing Xu

Dr. Ying-Qing Xu is currently a professor and chair of the Department of Information Art & Design, Tsinghua University. Before joining Tsinghua University in October 2011, he was a lead researcher at Microsoft Research Asia, where he had worked since January 1999, and a director of the Microsoft Digital Cartoon and Animation Laboratory of Beijing Film Academy. Ying-Qing received his B.Sc. from the Department of Mathematics of Jilin University, and his PhD in computer graphics from Academia Sinica (Beijing, 1997). He has co-authored over 70 papers in computer graphics, computer vision, interactive design, and e-heritage, and holds over 20 granted US patents, with more pending. His research interests are in information art design, natural user interface design, computer graphics, computer vision, e-heritage, and virtual reality. He is a member of the CAA (Chinese Artists Association), ACM (Association for Computing Machinery), and ACM SIGGRAPH, and a senior member of the IEEE (Institute of Electrical and Electronics Engineers) and CCF (China Computer Federation). He is a member of the academic committee of the Shenzhen Key Laboratory for Visual Computing and Analytics.

Aesthetics Orientated Programmable Camera

 

Abstract:

Since its birth, photography has been recognized as a form of art, in which a 3D scene is projected and recorded onto a 2D medium. There are, however, many possibilities for extending how images are captured, processed, and displayed. In this research project, we aim to change the historical nature of photography and to develop a systematic photographic solution that enables artistic expression beyond merely capturing the lighting of the physical world. We will provide a novel picturing experience that enables people not only to view the traditional content of a captured picture, but also to learn rich content-correlated information and knowledge automatically annotated by our system. (This work is funded by Intel Corp.)

 

Ping Tan

Dr. Ping Tan is an assistant professor at the National University of Singapore. He received the Ph.D. degree in Computer Science & Engineering from the Hong Kong University of Science and Technology in 2007. Before that, he received the B.S. degree in Applied Mathematics and the M.S. degree in Pattern Recognition and Intelligent Systems from Shanghai Jiao Tong University, China, in 2000 and 2003 respectively. Dr. Tan has served as an editorial board member of the International Journal of Computer Vision (IJCV) and as an associate editor of Machine Vision and Applications (MVA). He has served on the program committees of SIGGRAPH and SIGGRAPH Asia. Dr. Tan received the inaugural MIT TR35@Singapore award in 2012 (among 12 top innovators under 35 from Southeast Asia, Australia, and New Zealand) and the Image and Vision Computing Outstanding Young Researcher Honorable Mention Award in 2012.

A Global Linear Method for Camera Pose Registration

 

Abstract:

We present a linear method for global camera pose registration from pairwise relative poses encoded in essential matrices. Our method minimizes an approximate geometric error to enforce the triangular relationship in camera triplets. This formulation does not suffer from the typical ‘unbalanced scale’ problem of linear methods relying on pairwise translation direction constraints (i.e., an algebraic error), nor from the system degeneracy caused by collinear motion. In the case of three cameras, our method provides a good linear approximation of the trifocal tensor. It can be directly scaled up to register multiple cameras. The results obtained are accurate for point triangulation and can serve as a good initialization for final bundle adjustment. We evaluate the algorithm's performance with different types of data and demonstrate its effectiveness. Our system achieves good accuracy and robustness, and outperforms some well-known systems in efficiency.
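
For background (this is the classical pairwise constraint that such linear methods build on, not necessarily the exact formulation of the talk), each essential matrix decomposes into a relative rotation and a unit translation direction, which constrain the global rotations R_i and camera centers c_i; in one common convention,
\[
E_{ij} = [\,t_{ij}\,]_\times R_{ij}, \qquad R_{ij} = R_j R_i^{\top}, \qquad t_{ij} \parallel R_j\,(c_i - c_j).
\]
With the rotations fixed, the direction constraint can be enforced linearly in the unknown centers,
\[
[\,t_{ij}\,]_\times R_j\,(c_i - c_j) = 0,
\]
but this algebraic residual scales with the unknown baseline \(\|c_i - c_j\|\), which is exactly the ‘unbalanced scale’ issue that the approximate geometric error above is designed to avoid.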

 

Jin Huang

Jin Huang received his PhD degree from the Computer Science Department of Zhejiang University in 2007, receiving the Excellent Doctoral Dissertation Award of the China Computer Federation (CCF). He is an associate professor in the State Key Laboratory of CAD & CG at Zhejiang University, P.R. China, and currently a visiting faculty member at Caltech. His research interests include geometry processing and physically based simulation. He has served as a reviewer for ACM SIGGRAPH, ACM SIGGRAPH Asia, TVCG, Eurographics, etc.

Singularity Structure Generation and Fixing in Hexahedral Remeshing

 

Abstract:

Because of its highly regular structure, the hexahedral mesh has been widely used in physically based simulation and other applications. However, automatic high-quality hexahedral remeshing is extremely difficult, and has sometimes been dubbed the “Holy Grail” of the meshing community. The most challenging part is to automatically generate correct singularities according to various requirements. This talk will provide an overview of hexahedral remeshing and introduce our recent progress on several key problems. For singularity structure generation, we propose a spherical-harmonics-based method and an L1-based method that exploit the cubic-symmetry property of hexahedral meshes. I will also introduce our equation-graph-based method for detecting and fixing local and global singularity defects.

 

Qixing Huang

Qixing (Peter) Huang is a postdoctoral researcher at Stanford University. His research interests include data-driven geometry processing and co-analysis of shapes and collections of 3D models using convex optimization techniques. He won the Best Paper Award at SGP 2013 and the Most Cited Paper Award of the journal Computer-Aided Geometric Design in 2011 and 2012.

Cycle-consistent Regularization via Constrained Low-Rank Matrix Recovery

 

Abstract:

In this talk, I will present a theoretical framework for consistent shape matching, which is guaranteed to recover the ground-truth maps even in the presence of a constant fraction of incorrect correspondences. The framework establishes the equivalence between cycle-consistency and the fact that the binary matrix that stores the pairwise maps in blocks is low-rank and positive semidefinite. This leads to a constrained low-rank matrix recovery formulation, which admits a strong semidefinite programming (SDP) relaxation. I will show novel ways both to derive exact recovery conditions and to solve the SDP. The approach outperforms state-of-the-art joint shape matching techniques on benchmark datasets, and has many applications in graphics and vision.
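
In simplified form (a sketch of the general idea, not necessarily the exact program used in the talk), the pairwise maps are stacked into one big block matrix and the consistency requirement is relaxed as follows:
\[
X=\begin{pmatrix}
X_{11} & \cdots & X_{1n}\\
\vdots  & \ddots & \vdots\\
X_{n1} & \cdots & X_{nn}
\end{pmatrix},
\qquad X_{ij}\in\{0,1\}^{m\times m}\ \text{encoding the map from shape } i \text{ to shape } j.
\]
Cycle-consistency holds exactly when \(X = Y Y^{\top}\) for some "universe" assignment matrix \(Y\), so \(X \succeq 0\) and its rank is bounded by the universe size. A convex surrogate (omitting the row/column-sum constraints needed for partial maps) is
\[
\max_{X}\ \langle X, X^{\mathrm{obs}}\rangle
\quad \text{s.t.}\quad X \succeq 0,\quad X_{ii}=I,\quad 0\le X_{ij}\le 1,
\]
where \(X^{\mathrm{obs}}\) collects the noisy input maps and the optimizer is rounded to recover consistent correspondences.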

 

Jue Wang

Jue Wang is currently a Senior Research Scientist at Adobe Research. He received his B.E. and M.Sc. from the Department of Automation, Tsinghua University, Beijing, China, and his Ph.D. (2007) in Electrical Engineering from the University of Washington, Seattle, WA, USA. He received the Microsoft Research Fellowship and the Yang Research Award from the University of Washington in 2006, and joined Adobe Research in 2007 as a research scientist. His research interests include image and video processing, computational photography, and computer graphics and vision. He is a senior member of IEEE and a member of ACM.

Making Image Deblurring Practical

 

Abstract:

We have been working on the image deblurring problem for the last two years with the goal of making it practical enough for a commercial product. We achieved this earlier this year by releasing our deblurring technique as the Shake Reduction feature in Photoshop CC. Along the way we identified some unsolved yet important problems in image deblurring that have been largely ignored by previous research, and came up with some interesting solutions to tackle them. In this talk I will first briefly review some of these problems and solutions, then discuss in more detail our recent paper submission on how to leverage light streaks for image deblurring.

 

Hao Li

Hao Li recently joined the University of Southern California as a tenure-track assistant professor in CS. Before he went back to academia, he was a research lead at Industrial Light & Magic, where he developed the next generation real-time performance capture technologies for the upcoming Star Wars episodes. Prior to joining the force, Hao spent a year as a postdoctoral researcher at Columbia and Princeton Universities. His research lies in geometry processing, 3D reconstruction, and performance capture. While primarily developed to shift the traditional VFX pipeline to a real-time pre-production workflow, his work on markerless dynamic shape reconstruction has also impacted the field of human shape analysis and biomechanics. His algorithms are widely deployed in the industry, ranging from leading visual effects studios to manufacturers of state-of-the-art radiation therapy systems. He has been named one of this year's top 35 innovators under 35 by MIT Technology Review. He was also awarded the SNF Fellowship for prospective researchers in 2011 and the best paper award at SCA 2009. He obtained his PhD from ETH Zurich in 2010 and received his MSc degree in Computer Science in 2006 from the University of Karlsruhe (TH). He was a visiting researcher at EPFL in 2010, Industrial Light & Magic (Lucasfilm) in 2009, Stanford University in 2008, and many other places.

3D Human Capture for Everyone

 

Abstract:

In this talk, I will present three state-of-the-art techniques (published at SIGGRAPH and SIGGRAPH Asia) that make 3D digitization of humans and real-time performance capture more accessible for everyone. I will begin with an introduction to some fundamental techniques for processing captured geometry and briefly introduce (1) a novel system for 3D self-portraiture using a single static Kinect. I will then present a state-of-the-art real-time facial tracking system that does not require expression calibration but achieves superior tracking fidelity over existing methods (more accurate emotions and dialogues). Finally, I will give an overview of a multi-view stereo capture system for 3D hair reconstruction and discuss open problems and the future of depth sensing technologies.

 

Yongliang Yang

Yong-Liang Yang received his Bachelor's and Ph.D. degrees in computer science from Tsinghua University in 2004 and 2009, respectively. He is currently a research scientist at the Geometric Modeling and Scientific Visualization Center, King Abdullah University of Science and Technology. His research interests include computer graphics, geometric modeling, and geometry processing.

 

Urban Pattern: Layout Design by Hierarchical Domain Splitting

 

Abstract:

We present a framework for generating street networks and parcel layouts. Our goal is the generation of high-quality layouts that can be used for urban planning and virtual environments. We propose a solution based on hierarchical domain splitting using two splitting types: streamline-based splitting, which splits a region along one or multiple streamlines of a cross field, and template-based splitting, which warps pre-designed templates to a region and uses the interior geometry of the template as the splitting lines. We combine these two splitting approaches into a hierarchical framework, providing automatic and interactive tools to explore the design space.
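
As a rough illustration of the control flow only (not the authors' implementation, with toy rectangular regions standing in for real street blocks and parcels), hierarchical domain splitting can be sketched as a recursion that applies one of the two splitting types at each level:

from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    x: float
    y: float
    w: float
    h: float

def split_streamline(r: Region) -> List[Region]:
    # Placeholder: cut along the longer axis; a real streamline split would
    # follow one or more streamlines of a cross field through the region.
    if r.w >= r.h:
        return [Region(r.x, r.y, r.w / 2, r.h), Region(r.x + r.w / 2, r.y, r.w / 2, r.h)]
    return [Region(r.x, r.y, r.w, r.h / 2), Region(r.x, r.y + r.h / 2, r.w, r.h / 2)]

def split_template(r: Region) -> List[Region]:
    # Placeholder: a 2x2 grid "template" warped to the region.
    hw, hh = r.w / 2, r.h / 2
    return [Region(r.x, r.y, hw, hh), Region(r.x + hw, r.y, hw, hh),
            Region(r.x, r.y + hh, hw, hh), Region(r.x + hw, r.y + hh, hw, hh)]

def hierarchical_split(region: Region, parcel_area: float) -> List[Region]:
    """Recursively split a region until the pieces are small enough to be parcels."""
    if region.w * region.h <= parcel_area:
        return [region]
    # Choose a splitting type; a real system would decide from geometry,
    # templates, or user interaction rather than from area alone.
    if region.w * region.h > 4 * parcel_area:
        children = split_streamline(region)
    else:
        children = split_template(region)
    parcels: List[Region] = []
    for child in children:
        parcels.extend(hierarchical_split(child, parcel_area))
    return parcels

print(len(hierarchical_split(Region(0, 0, 16, 16), parcel_area=8)))  # 32 parcels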

 

Johannes Kopf

Johannes Kopf is a researcher in the Interactive Visual Media group at Microsoft Research in Redmond. Before joining MSR he obtained a PhD in computer science at the University of Konstanz. His research interests span a variety of topics in computer graphics, vision, and related fields. Most recently, he has worked on computational photography, image-based rendering, and vectorization. He recently received the 2013 Eurographics Young Researcher Award.

Scaling Tiny Images

 

Abstract:

I will talk about recent work on scaling normal-size images down to tiny thumbnails/icons/pixel art, as well as on inverting the process, i.e., extracting high-resolution images from tiny inputs. The key idea in the downscaling process is to optimize the shape and locations of the downsampling kernels in a content-adaptive manner to better align with local image features. We optimize these kernels to represent the input image well, by finding an output image from which the input can be well reconstructed. This enables our algorithm to produce sharper results (without ringing artifacts) and to preserve image features better than existing content-agnostic scaling filters. In the second part I will talk about a recent algorithm for inverting the above process, i.e., recovering a high-resolution image from very small inputs such as icons or pixel art, where features are at the scale of a single pixel. The key challenge lies in interpreting the image and extracting as much information as possible, in particular resolving the connectedness/separation ambiguity of diagonal neighbors in the pixel lattice.
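
One way to write the downscaling idea as an optimization (a schematic objective consistent with the description above, not necessarily the paper's exact formulation): each output pixel p gets its own kernel k_p, whose shape and location are free variables, and the kernels and output O are chosen so that the input I is well reconstructed from O:
\[
O(p) = \frac{\sum_{q} k_p(q)\, I(q)}{\sum_{q} k_p(q)},
\qquad
\min_{\{k_p\},\,O}\ \sum_{q}\big\| I(q) - \hat{I}\big(q;\,O,\{k_p\}\big) \big\|^{2},
\]
where \(\hat{I}\) denotes the image reconstructed from the downscaled output using the same kernels; letting the kernels deform and shift is what makes the result content-adaptive.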

 

Hui Huang

Hui Huang is a professor at the Shenzhen Institutes of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS), where she directs the Visual Computing Research Center (VCC). She received her PhD in Applied Math from the University of British Columbia (Canada) in 2008 and another PhD in Computational Math from Wuhan University (China) in 2006. Her research interests include Computer Graphics, Point-based Modeling, Image Processing, and Scientific Computing. She is the recipient of the 2012 Lujiaxi Young Talent Award of the Chinese Academy of Sciences and the 2011 & 2013 Peacock Talent Award of Shenzhen.

Mind the Gap: Tele-Registration for Structure-Driven Image Completion

 

Abstract:

Concocting a plausible composition from several non-overlapping image pieces, whose relative positions are not fixed in advance and without having the benefit of priors, can be a daunting task. Here we propose such a method, starting with a set of sloppily pasted image pieces with gaps between them. We first extract salient curves that approach the gaps from non-tangential directions, and use likely correspondences between pairs of such curves to guide a novel tele-registration method that simultaneously aligns all the pieces together. A structure-driven image completion technique is then proposed to fill the gaps, allowing the subsequent employment of standard inpainting tools to finish the job.