Blending Subdivision Curves and Their Application
This paper designs two families of blending subdivision schemes based on known approximating subdivisions: a binary blending subdivision based on cubic B-splines, and a ternary blending subdivision based on the ternary 3-point scheme. The approach provides a way to control the length of the generated curves by dynamically introducing a blending parameter. Continuity analysis based on generating functions demonstrates that both families of blending subdivision are continuous in most cases. As an application, a length-preserving subdivision strategy is also proposed.
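A minimal sketch of the binary case: one refinement step of the classic cubic B-spline subdivision, blended with a rule that keeps the old vertices fixed. The blending rule and the parameter `alpha` here are hypothetical illustrations of how such a parameter can influence the generated curve, not the paper's exact scheme.

```python
import numpy as np

def blended_subdivide(points, alpha=1.0):
    """One binary subdivision step on a closed polygon.

    alpha = 1.0 applies the classic cubic B-spline refinement rules;
    alpha = 0.0 keeps the old vertices in place.  This blend is a
    hypothetical illustration, not the paper's exact scheme.
    """
    p = np.asarray(points, dtype=float)
    n = len(p)
    new = np.empty((2 * n, p.shape[1]))
    for i in range(n):
        pm, pi, pp = p[i - 1], p[i], p[(i + 1) % n]
        # "even" point: blend the old vertex with the B-spline mask (1, 6, 1)/8
        new[2 * i] = (1 - alpha) * pi + alpha * (pm + 6 * pi + pp) / 8
        # "odd" point: edge midpoint (identical in both rules)
        new[2 * i + 1] = (pi + pp) / 2
    return new

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
refined = blended_subdivide(square, alpha=1.0)
print(refined.shape)  # (8, 2)
```

Smaller `alpha` pulls the even points back toward the original vertices, which is one simple way a blending parameter can counteract the shrinkage of approximating schemes.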

3D Mesh Segmentation Using Mean-Shifted Vertex Curvature
Real-world objects obtained from scanners often require preprocessing to guarantee a good representation. A question that arises in this process is whether the preprocessing performed is sufficient. The work described in this paper proposes a method to evaluate the subjective quality of a mesh surface through machine learning. Several metrics based on both cross sections and the mesh surface are proposed; these measures are then used as input to a two-layer perceptron that simulates subjective assessment. In contrast to existing mesh quality assessment methods, by utilizing a learning method, our approach yields lower error and a closer fit to human perception.

A Learning-based Approach for Automated Quality Assessment of Computer-Rendered Image
Computer-generated images are common in numerous computer graphics applications such as games, modeling, and simulation. There is normally a tradeoff between the time allocated to the generation of each image frame and the quality of the image, where better-quality images require more processing time. Specifically, in the rendering of 3D objects, the surfaces of objects may be manipulated by subdividing them into smaller triangular patches and/or smoothing them so as to produce better-looking renderings. Since unnecessary subdivision results in increased rendering time and unnecessary smoothing results in reduced detail, there is a need to automatically determine the amount of processing necessary to produce good-quality rendered images. In this paper we propose a novel supervised-learning-based methodology for automatically predicting the quality of rendered images of 3D objects. To perform the prediction, we train on a data set labeled by human observers for quality. We are then able to predict the quality of renderings (not used in the training) with an average prediction error of roughly 20%. The proposed approach is compared to known techniques and is shown to produce better results.

Alignment of 3D Building Models with Satellite Images Using Extended Chamfer Matching
Large-scale alignment of 3D building models and satellite images has many applications ranging from realistic 3D city modeling to urban planning. In this paper, we address this problem by matching the 2D projection of building roofs against edges detected in satellite images. To better handle noise and occlusions, the proposed approach seeks an optimal matching location using an extended Chamfer matching algorithm. In addition, it optimizes the alignment within a large region using a global constraint. We show that the proposed approach can estimate the alignment of matching parts and produces robust results under occlusion. We test the proposed algorithm on two datasets that cover the downtown areas of San Francisco and Chicago. The results show that the proposed algorithm significantly improves registration accuracy while maintaining consistent performance.
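The core of Chamfer matching, scoring a template edge map against image edges by the average nearest-edge distance, can be sketched as follows. This is a minimal brute-force version over a small translation window; the paper's extended matching and global constraint are not reproduced here.

```python
import numpy as np

def chamfer_score(template_edges, image_edges):
    # Mean distance from each template edge pixel to its nearest image
    # edge pixel; lower means a better alignment.
    t = np.argwhere(template_edges).astype(float)
    e = np.argwhere(image_edges).astype(float)
    d = np.sqrt(((t[:, None, :] - e[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).mean()

def best_translation(template_edges, image_edges, search=4):
    # Exhaustive search over integer shifts in a small window
    # (a hypothetical simplification of the paper's optimization).
    best_shift, best_score = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(template_edges, dy, 0), dx, 1)
            score = chamfer_score(shifted, image_edges)
            if score < best_score:
                best_shift, best_score = (dy, dx), score
    return best_shift, best_score
```

In practice a distance transform of the image edge map replaces the explicit nearest-neighbor search, making each candidate shift a simple lookup.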

Learning Based Roof Style Classification in 2D Satellite Images
Accurately recognizing building roof styles leads to much more realistic 3D building modeling and rendering. In this paper, we propose a novel system for image-based roof style classification using machine learning techniques. Our system is capable of accurately recognizing four individual roof styles as well as complex roofs composed of multiple parts. We make several novel contributions. First, we propose an algorithm that segments a complex roof into parts, which enables our system to recognize the entire roof based on the recognition of each part. Second, to better characterize a roof image, we design a new feature extracted from the roof edge image. We demonstrate that this feature performs much better than Histogram of Oriented Gradients (HOG), Scale-Invariant Feature Transform (SIFT), and Local Binary Patterns (LBP). Finally, to generate a classifier, we propose a learning scheme that trains on both synthetic and real roof images. Experimental results show that our classifier performs well on several test collections.

Learning from Synthetic Data Using a Stacked Multichannel Autoencoder
Learning from synthetic data has many important and practical applications; one example is photo-sketch recognition. Using synthetic data is challenging due to the differences in feature distributions between synthetic and real data, a phenomenon we term the synthetic gap. In this paper, we investigate and formalize a general framework, the Stacked Multichannel Autoencoder (SMCAE), that bridges the synthetic gap and enables learning from synthetic data more efficiently. In particular, we show that our SMCAE can not only transform and use synthetic data on the challenging face-sketch recognition task, but can also help simulate real images, which can be used for training recognition classifiers. Preliminary experiments validate the effectiveness of the framework.

Suburban ground LiDAR Segmentation
As part of a large-scale 3D modeling system for LiDAR data from city scenes, we describe a pipeline for segmenting point clouds and extracting the point clouds of buildings. Segmentation of point clouds plays a vital role in city modeling because further tasks such as building texturing, building completion, and reconstruction benefit from the segmentation result. A key contribution is the use of machine learning in the segmentation pipeline in combination with building footprints. Features are computed to distinguish building facades from other objects; they are designed to capture characteristics of building facades such as local pattern repetition and planarity. Test data is obtained from a data collection vehicle driving through the most crowded areas of downtown Chicago and San Francisco, and we demonstrate that our method achieves accuracy above 89%.
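The planarity cue mentioned above is commonly derived from the eigenvalues of a local neighborhood's covariance matrix. The following sketch uses one standard eigenvalue-based definition; the paper's exact feature set is not specified here, so treat this as an assumed illustration.

```python
import numpy as np

def planarity(neighborhood):
    """Planarity of a local point neighborhood (N x 3 array).

    Uses the covariance eigenvalues l1 >= l2 >= l3 and the common
    measure (l2 - l3) / l1: values near 1 indicate a flat,
    facade-like patch; values near 0 indicate linear or scattered
    structure.  (One standard definition among several; assumed here.)
    """
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)
    l = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    return (l[1] - l[2]) / max(l[0], 1e-12)
```

A segmentation pipeline would evaluate this per point over a fixed-radius or k-nearest neighborhood and feed it, with other features, to the classifier.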

Structure Preserved Point Cloud Simplification
We propose a point cloud simplification approach that preserves both local and global characteristics of the point cloud as it is being simplified. The proposed approach preserves the information in the point cloud by maximizing the similarity between the distributions of pairwise distances of the original and simplified point clouds. The distribution of pairwise point distances has been used extensively as a feature in 3D object recognition and information retrieval.
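The pairwise-distance distribution (often called the D2 shape distribution) and a similarity between two such distributions can be sketched as below. The histogram-intersection similarity is an assumed stand-in for whatever measure the paper optimizes.

```python
import numpy as np

def d2_histogram(points, bins=16, r_max=None):
    # Normalized histogram of all pairwise distances (the "D2"
    # shape distribution).
    p = np.asarray(points, dtype=float)
    d = np.sqrt(((p[:, None] - p[None, :]) ** 2).sum(-1))
    d = d[np.triu_indices(len(p), k=1)]          # each pair once
    hist, _ = np.histogram(d, bins=bins, range=(0, r_max or d.max()))
    return hist / hist.sum()

def distribution_similarity(a, b):
    # Histogram intersection in [0, 1]; 1 means identical distributions.
    return np.minimum(a, b).sum()
```

A simplification loop in this spirit would, at each step, drop the candidate point whose removal keeps `distribution_similarity` against the original cloud highest.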

Map Visual Saliency Computation and Its Applications
Global consistency and normalization in large-scale saliency computation are very important, especially in applications such as navigation. Generally, the saliency of an object differs with its context. However, in map applications there should be global saliency consistency for the same targets; otherwise an ordinary object placed in a different context, such as a palm tree in a boundless desert, will receive very high saliency and mislead navigation. In the reference paper, since saliency is derived from people's descriptions, there is no way to enforce global consistency in the saliency computation. In our approach, however, the two data sources, street-view LiDAR and video, are registered frame by frame, so global consistency can be achieved. We propose a simple and effective framework to normalize saliency across all collected data.
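A minimal sketch of the normalization idea, assuming per-frame saliency maps are rescaled by statistics pooled over the whole collection so that the same object scores consistently across contexts (the paper's actual normalization may differ):

```python
import numpy as np

def normalize_global(saliency_maps):
    """Rescale per-frame saliency maps by global mean and standard
    deviation computed over the entire collection, rather than per
    frame.  A hypothetical, minimal version of global normalization."""
    stack = np.stack([np.asarray(m, dtype=float) for m in saliency_maps])
    mu, sd = stack.mean(), stack.std() + 1e-12
    return [(m - mu) / sd for m in stack]
```

Per-frame normalization would instead force every frame to contain something "salient", which is exactly the inconsistency global pooling avoids.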

Augmented Reality
Instead of using a 3D model in an augmented reality setup, we propose a new approach that creates a 3D illusion using HD pictures and Kinect images taken from 360 degrees and different heights around the product. The data collection configuration is very simple: we use only a calibrated digital camera and a depth camera. The entire procedure is automatic, so the proposed approach can easily be applied to a large number of products. We show that when a sufficient number of intermediate pictures of the product can be generated, the proposed approach provides a user experience nearly equivalent to that of a 3D model.

City Scale Building Facade Extraction and Streetview Reconstruction

CGMOS: Certainty Guided Minority OverSampling
Handling imbalanced datasets is a challenging problem that, if not treated correctly, results in reduced classification performance. Imbalanced datasets are commonly handled using minority oversampling; SMOTE is a successful oversampling algorithm with numerous extensions. SMOTE extensions have no theoretical guarantee of working better than SMOTE, and in many instances their performance is data dependent. In this paper we propose a novel extension of the SMOTE algorithm with a theoretical guarantee of improved classification performance. The proposed approach considers the classification performance of both the majority and minority classes. New data points are added by considering certainty changes in the dataset. The paper provides a proof that the proposed algorithm is guaranteed to work better than SMOTE. Experimental results on 30 real-world datasets show that the proposed approach works better than existing algorithms when using different classifiers.
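For context, classic SMOTE synthesizes minority samples by interpolating between a minority point and one of its nearest minority neighbors, as sketched below. CGMOS additionally weights candidate points by certainty changes; that weighting is omitted here, so this is the baseline, not the proposed algorithm.

```python
import numpy as np

def smote(minority, n_new, k=5, rng=None):
    """Classic SMOTE oversampling (baseline, not CGMOS).

    Each synthetic point lies on the segment between a random minority
    sample and one of its k nearest minority neighbors.
    """
    rng = rng or np.random.default_rng()
    X = np.asarray(minority, dtype=float)
    # Pairwise distances within the minority class; exclude self-matches.
    d = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = nn[i, rng.integers(min(k, len(X) - 1))]
        out.append(X[i] + rng.random() * (X[j] - X[i]))  # interpolate
    return np.array(out)
```

Because every synthetic point is a convex combination of two minority samples, the new data never leaves the minority class's bounding region, which is both SMOTE's strength and the reason extensions reweight where to interpolate.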

Learning Based Lecture Video Indexing
Lecture videos are common and their number is increasing rapidly. Consequently, automatically and efficiently indexing such videos is an important task. Video segmentation is a crucial step of video indexing that directly affects indexing quality. We are developing a system for automated video indexing, and in this paper we discuss our approach for video segmentation and the classification of video segments. The novel contributions of this paper are twofold. First, we develop a dynamic Gabor filter and use it to extract features for video frame classification. Second, we propose a cascading algorithm that is capable of extracting index frames. The proposed approach achieves both higher precision and higher recall compared with a commercial system, demonstrating that performance is significantly improved by the enhanced features and the cascading algorithm.
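The building block behind the feature extraction is the standard 2D Gabor kernel, a Gaussian envelope modulated by an oriented sinusoid. The sketch below shows the standard kernel only; the "dynamic" per-frame adaptation of its parameters described in the paper is not reproduced.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Standard 2D Gabor kernel.

    theta is the orientation, lam the wavelength of the sinusoid,
    sigma the Gaussian envelope width, psi the phase offset.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier
```

Convolving a frame with a small bank of such kernels at several orientations and summarizing the response energies yields a compact feature vector for frame classification.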

Email: xzhang22@hawk.iit.edu
Address: Rm 115, 3300 South Federal St.
Chicago, IL 60616