Tutorials
ISMAR 2020 Tutorial
OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
Presenters: Allen Yang, Adam Chang, and Mohammad Keshavarzi
Abstract
This tutorial is a revised and updated edition of the first OpenARK tutorial, presented at ISMAR 2019. Its aim is to present an open-source augmented reality development kit called OpenARK. OpenARK was founded by Dr. Allen Yang at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. OpenARK is currently used by several industrial partners, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals. In the same year in China, OpenARK also won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition, the largest such competition in China. OpenARK currently receives funding support from a research grant from the Intel RealSense project and from the ONR. OpenARK includes a multitude of core functions critical to AR developers and future products, including multi-modality sensor calibration, depth-based gesture detection, depth-based deformable avatar tracking, and SLAM and 3D reconstruction. All functions are based on state-of-the-art real-time algorithms and are coded to run efficiently on mobile-computing platforms.
Another core component of OpenARK is its set of open-source depth perception databases. We have made two unique databases available to the community: one on depth-based gesture detection, and the other containing mm-accuracy indoor and outdoor large-scale scene geometry models with AR attribute labeling. We will give an overview of our effort in designing and constructing these databases, which could benefit the community at large. We will also discuss our effort to make depth-based perception easily accessible to application developers, who may not have, and should not be required to have, a deep understanding of 3D point cloud and reconstruction algorithms. The last core component of OpenARK is an interpreter of 3D scene layouts and their compatible AR attributes, based on generative design principles first invented for creating architectural design layouts. We will discuss the fundamental concepts and algorithms of generative design and how it can be used to interpret common 3D scenes and their attributes for intuitive AR application development.
Videos
OpenARK Introduction Part 1 by Allen Yang
OpenARK Introduction Part 2 by Allen Yang
SLAM and 3D Reconstruction Part 1 by Adam Chang
SLAM and 3D Reconstruction Part 2 by Adam Chang
OpenARK Installation Part 3 by Adam Chang
Generative Modeling in Spatial Computing by Mohammad Keshavarzi
ISMAR 2019 Tutorial
OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
Presenters: Allen Y. Yang, Luisa Caldas, Woojin Ko, and Joseph Menke
Abstract
The aim of this tutorial is to present an open-source augmented reality development kit called OpenARK. OpenARK was founded by Dr. Allen Yang at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. OpenARK is currently used by several industrial partners, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals. In the same year in China, OpenARK also won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition, the largest such competition in China. OpenARK currently receives funding support from a research grant from the Intel RealSense project and from the NSF.
OpenARK includes a multitude of core functions critical to AR developers and future products, including multi-modality sensor calibration, depth-based gesture detection, depth-based deformable avatar tracking, and SLAM and 3D reconstruction. All functions are based on state-of-the-art real-time algorithms and are coded to run efficiently on mobile-computing platforms.
Another core component of OpenARK is its set of open-source depth perception databases. We have made two unique databases available to the community: one on depth-based gesture detection, and the other containing mm-accuracy indoor and outdoor large-scale scene geometry models with AR attribute labeling. We will give an overview of our effort in designing and constructing these databases, which could benefit the community at large.
We will also discuss our effort to make depth-based perception easily accessible to application developers, who may not have, and should not be required to have, a deep understanding of 3D point cloud and reconstruction algorithms. The last core component of OpenARK is an interpreter of 3D scene layouts and their compatible AR attributes, based on generative design principles first invented for creating architectural design layouts. We will discuss the fundamental concepts and algorithms of generative design and how it can be used to interpret common 3D scenes and their attributes for intuitive AR application development.
The teaching material of this tutorial will be drawn from a graduate-level advanced topics course on AR/VR offered at UC Berkeley for the past three years.
Agenda and Slides
Session 1: OpenARK — Tackling AR Challenges via an Open-Source Development Kit by Allen Yang
Session 2: OpenARK — Open Source Augmented Reality by Joe Menke
Session 3: Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction by Woojin Ko