
APMAR2017-Program

Source: Administrator | Posted 2017-03-17


The final program can be downloaded here.

Program.pdf


Date: July 2nd, Sunday

 

8:00-17:00

Welcome at the International Education and Exchange Center of Beijing Institute of Technology

19:00

Preparatory meeting for APMAR2018 (APMAR members only)

Date: July 3rd, Monday

Time

Title

Chair

7:50 – 8:10

Registration

8:10 – 8:30

Welcome

Yongtian Wang

8:30 – 9:10

Keynote Speech

Nassir Navab

9:10 – 9:40

Coffee Break


Paper Session 1

Takafumi Taketomi

9:40 – 9:55

A Multi-View Camera-Based Diminished Reality for Work Area Visualization

Momoko Maezawa, Shohei Mori and Hideo Saito

In this paper, we present a diminished reality method that diminishes the tools and hands to visualize the hidden work area, providing a transparent view using multiple cameras that capture the work area.

9:55 – 10:10

Intuitive Visual Hints for Guiding Head Movement in Learning Tai Chi Chuan with Head-Mounted Display

Ping-Hsuan Han, Yilun Zhong, Han-Lei Wang, Ming-Sui Lee and Yi-Ping Hung

In this paper, we propose three visual designs for guiding the user's head movement when learning TCC with an OST-HMD. We conduct a user study to compare and discuss the differences in head orientation between the participants and the virtual coach.

10:10 – 10:25

Indirect Augmented Reality without Pre-capturing Target Environments

Kunat Pipatanakul, Norihiko Kawai, Tomokazu Sato, Kiyoshi Kiyokawa and Naokazu Yokoya

In this study, we introduce a new IAR method that simultaneously creates a panoramic image by stitching images captured by a monocular camera and superimposes virtual objects on it, realizing a jitter-free AR experience on demand.
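
As a rough illustration of the stitching step described above (not the authors' pipeline), OpenCV's high-level stitcher can build a panorama from monocular frames; the file names below are hypothetical placeholders:

import cv2

# Hypothetical input frames from a monocular camera; replace with real paths.
frames = [cv2.imread(p) for p in ("frame0.jpg", "frame1.jpg", "frame2.jpg")]
stitcher = cv2.Stitcher_create()           # OpenCV's built-in panorama stitcher
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)  # virtual objects would be overlaid on this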

10:25 – 10:40

Converting face sketches to photo-realistic digital images

Youngju Choi and Yongduek Seo

We present an approach for generating photo-realistic digital images of faces from inputs provided in the form of hand drawings. Our approach adopts several stages of deep neural networks, including generative adversarial network modules.

10:40 – 10:55

An Augmented Reality Supports for Self-learners Learning Activity Involving Motion: A Case Study on an Alphabet Writing System

Yuya Miyoshi, Yuji Oyamada, Aya Shiraiwa, Kazu Mishiba and Katsuya Kondo

In this paper, we aim to provide further support to self-learners who practice to master an activity involving motion. For this purpose, we propose an AR framework that shows users what they should improve and how.

10:55 – 11:10

Influence on Weight Sensation Caused by Visual Diminishing of Real Objects

Miho Tanaka, Ayushi Misra, Kana Oshima, Satoshi Hashiguchi, Shohei Mori, Asako Kimura, Fumihisa Shibata and Hideyuki Tamura

In this study, we use MR-based visuo-haptic experiences to investigate the mechanisms by which vision and haptics interact. In contrast to MR, diminished reality (DR) can virtually erase a real object from sight. We also study the relationship between various ranges of DR-based visual effects and haptic sensations using pole-shaped real objects.


Paper Session 2

Yue Liu

11:10 – 11:25

Depth Map Restoration via Regularization in Curvelet Domain

Qirui Zhang, Takafumi Taketomi, Alexander Plopski, Christian Sandor and Hirokazu Kato

In this work, we present a quantitative evaluation to verify the feasibility of achieving an accurate depth map via regularization in the curvelet domain.

11:25 – 11:40

Gaze Depth Estimation for AR/VR HMD

Youngho Lee, Thammathip Piumsomboon, Gun Lee and Mark Billinghurst

In this paper, we describe a new method for determining gaze depth in a head-mounted eye tracker. We implemented a gaze depth tracker using the gaze normal vector: a binocular gaze tracker with two eye cameras provides the gaze vectors, which are input to an MLP neural network for training and estimation.
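
A minimal sketch of this kind of regression, assuming the two per-eye gaze normal vectors are flattened into a six-dimensional input (the network size and the synthetic training data below are illustrative, not the authors' setup):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
vecs = rng.normal(size=(500, 2, 3))                  # synthetic left/right gaze normals
vecs /= np.linalg.norm(vecs, axis=2, keepdims=True)  # normalize to unit vectors
X = vecs.reshape(500, 6)                             # one 6-D feature per gaze sample
y = rng.uniform(0.2, 5.0, size=500)                  # synthetic fixation depths (meters)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)
depth = model.predict(X[:1])                         # estimated gaze depth for one sample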

11:40 – 11:55

Image Matching between Cameras for Vision Augmentation HMDs

Ryosuke Goto, Jason Orlosky, Photchara Ratsamee, Tomohiro Mashita, Yuki Uranishi, Kiyoshi Kiyokawa and Haruo Takemura

In this study, we propose a robust and accurate image matching method that calculates the corresponding image areas between cameras at runtime for smooth and natural image transitions, with vision augmentation HMDs in mind.

11:55 – 12:10

Determining Perceived Magnitude with Respect to Intensity and Frequency Control of Ultrasound Transducers Array

Tatyana Ogay, Ahsan Raza and Seokhee Jeon

In this paper, we present the procedures and results of two psychophysical experiments using an ultrasonic transducer phased array.
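
For context, focusing such an array at a point amounts to driving each element with a phase that compensates its propagation delay; a sketch under assumed geometry (a 10 cm square array at 40 kHz, focal point 15 cm above its center):

import numpy as np

f, c = 40e3, 343.0                         # drive frequency (Hz), speed of sound (m/s)
xs = np.linspace(-0.05, 0.05, 10)          # 10 x 10 element grid over a 10 cm aperture
gx, gy = np.meshgrid(xs, xs)
elems = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)
focus = np.array([0.0, 0.0, 0.15])         # assumed focal point 15 cm above the array
d = np.linalg.norm(elems - focus, axis=1)  # element-to-focus distances
phase = (2 * np.pi * f * d / c) % (2 * np.pi)  # per-element phase advance for focusing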

12:10 – 12:25

An Automated Calibration Method for Large Scale Projector-Camera System

Chun Xie, Kenji Suzuki, Yoshinari Kameda and Itaru Kitahara

In this paper, we propose a method that performs projector image undistortion as well as alignment automatically by utilizing projector-camera systems.

12:30 – 14:00

Lunch


Paper Session 3

Youngho Lee

14:00 – 14:15

Location estimation from pre-recorded video taken by omnidirectional cameras

Yuta Nagumo, Itaru Kitahara and Yoshinari Kameda

We are studying a navigation system guided by a video of a planned route. This paper proposes a position estimation method that uses omnidirectional cameras. Position estimation is performed by retrieving, from a prerecorded image database, the image most similar to a picture taken by a pedestrian. In this paper, we improve the accuracy of the image retrieval by using two omnidirectional cameras. We also discuss how to cope with strong light sources, and we evaluate the performance of the proposed method on a route on our campus.
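
The retrieval step can be sketched as nearest-neighbour search over global image descriptors; everything below (the downsampled-pixel descriptor and the synthetic frames) is an illustrative stand-in for the paper's omnidirectional imagery:

import numpy as np

def descriptor(img):
    # Crude global descriptor: downsample and L2-normalize the pixels.
    small = img[::16, ::16].astype(np.float64).ravel()
    return small / (np.linalg.norm(small) + 1e-8)

rng = np.random.default_rng(0)
prerecorded = rng.random((50, 480, 640))       # synthetic route frames, one per position
db = np.stack([descriptor(f) for f in prerecorded])
query = rng.random((480, 640))                 # synthetic pedestrian photo
best = int(np.argmax(db @ descriptor(query)))  # cosine similarity -> route position index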

14:15 – 14:30

An Application of Augmented Haptics: Prostate Palpation Simulator with Realism

Aishwari Talhan and Seokhee Jeon

In this paper, we present an augmented-haptics prostate palpation simulator. Its realism and high fidelity are confirmed through a human perception experiment with experienced medical professionals.

14:30 – 14:45

Automated Backlight Modulation of Optical See-through Head Mounted Displays Based on Users' Visibility Evaluation

Chang Liu, Alexander Plopski, Tomohiro Mashita, Yoshihiro Kuroda, Kiyoshi Kiyokawa and Haruo Takemura

We propose a novel approach to modulate the backlight luminance of an optical see-through head mounted display (OST-HMD) in order to reach a well-balanced visibility condition.

14:45 – 15:00

Multimodal Interaction Design for Energy General Science Education based on Future Classroom Concepts

Sheng-Ming Ryan Wang and Chieh-Ju Huang

We first propose an overall learning mechanism for a general science education scenario, based on the four phases of the FC development concept: Exploration, Experiment, Experience, and Empowerment. We then integrate the FC user scenarios with the ARCS motivation model and design the persona, customer journey map (CJM), and service blueprint (SB) for the service design (SD) implementation flow. Finally, drawing on the ancient Chinese text “Classic of Mountains and Seas”, we write the story for a general science education animation, “The Sound of Geothermal”, and an ebook that operates based on Tangible Interaction Design.

15:00 – 15:15

Study of walking support method in real space when immersed in virtual space

Kohei Kanamori, Nobuchika Sakata, Tomu Tominaga, Yoshinori Hijikata and Kensuke Harada

In this paper, we propose two methods that support interaction with the real world while playing an immersive VR game, even while walking, with as little loss of the immersive feeling as possible. The first method superimposes a 3D point cloud of the real space onto the virtual space shown in the HMD.

15:15 – 15:30

Structure from Motion with Bezier-Splines for Hand-held Space Carving

Zhirui Wang and Laurent Kneip

In this work, we present a complete end-to-end pipeline which produces meaningful dense 3D models from natural data: the target object is placed on a structured but unknown planar background, and the data is captured using only a hand-held monocular camera. Our scientific contribution consists of our background parametrization: we use Bezier splines to parametrize the curves in the background and perform bundle adjustment, thus returning poses that directly permit space carving.
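
As a reference for the background parametrization, a cubic Bezier curve can be evaluated with de Casteljau's algorithm; the control points below are arbitrary:

import numpy as np

def bezier_point(ctrl, t):
    # De Casteljau's algorithm: repeated linear interpolation of control points.
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ctrl = [[0, 0], [1, 2], [3, 2], [4, 0]]  # arbitrary 2-D control polygon
curve = np.array([bezier_point(ctrl, t) for t in np.linspace(0, 1, 50)])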


Paper Session 4

Yi-Ping Hung

15:30 – 15:45

Applications of IoT and AR combination

Ryo Akiyama, Alexander Plopski, Takafumi Taketomi, Christian Sandor, Hirokazu Kato and Daniel Saakes

We held a workshop on applications that combine the Internet of Things (IoT) and Augmented Reality (AR) to support human life. This talk explains the workshop and its outcomes.

15:45 – 16:00

Gesture Recognition from Depth Images with Hierarchical Hand Parsing

Menghsuan Lin and Shang-Hong Lai

In this paper, we present a gesture recognition algorithm based on hierarchical hand parsing from a single depth image. According to hand configuration, we propose to segment a hand into 11 non-overlapping parts with a novel 3-layer hierarchical Random Decision Forest (RDF) per-pixel classifier.
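
A toy version of per-pixel part classification with a decision forest (a single flat forest on synthetic features, not the paper's 3-layer hierarchy):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))     # synthetic per-pixel depth-difference features
y = rng.integers(0, 11, size=2000)  # labels for the 11 non-overlapping hand parts
forest = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
forest.fit(X, y)
parts = forest.predict(X[:5])       # predicted hand-part label per pixel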

16:00 – 16:15

Empowering a POB-Diminished Reality Method to Handle Rigid Moving Objects with Real-time Observation

Masaru Horita, Daiki Sakauchi, Shohei Mori, Sei Ikeda, Fumihisa Shibata, Asako Kimura and Hideyuki Tamura

In this study, we propose a novel DR method that synthesizes the user's viewpoint images to reduce occlusion cracks and reproduce the specular appearance of not only a static scene but also a moving object, by empowering POB-DR with ROB-DR.

16:15 – 16:30

Research on Equipment Maintenance Guidance System and Demo Realization Based on Augmented Reality

Jiang Yuan, Sun Han-Bing and Li Wei-Ke

An equipment maintenance guidance system is essentially a kind of software system. To make its final functionality better meet the needs of equipment maintenance, this paper studies the requirements of such a system and builds its structure, laying a foundation for the follow-up development of the system.

16:30 – 16:45

Assisting People with Hand Tremor to Type Steadily on Keyboard Using Optical See-Through Mixed Reality

Wang Kai, Daisuke Iwai and Kosuke Sato

This research proposes an optical see-through mixed reality system that reduces the effect of hand tremor, assisting people with trembling hands to type steadily. The system virtually stabilizes the trembling hand by optically overlapping it with a stabilized virtual hand, producing a realistic typing sensation without tremor. Simulation experiments showed that the system supports typing with a trembling hand, and through subjective investigations we confirmed that a virtual-to-real intensity ratio of 0.75:0.25 was optimal.
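
The reported optimal blend is a simple pixel-wise mix; a sketch with placeholder images (the 0.75:0.25 ratio is from the abstract, the arrays are synthetic):

import numpy as np

virtual_hand = np.zeros((480, 640))  # placeholder: rendered, stabilized hand image
real_hand = np.ones((480, 640))      # placeholder: optically seen trembling hand
alpha = 0.75                         # virtual:real intensity ratio 0.75:0.25
blended = alpha * virtual_hand + (1 - alpha) * real_hand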

16:50 – 17:20

Coffee Break

17:20 – 18:00

Closing Speech

Yongtian Wang

Date: July 4th, Tuesday

8:00 – 18:00

Town meeting

18:00 – 20:30

Return to Beijing Institute of Technology