Sunday, March 7, 2010

Android and Augmented Reality I

Hi everybody,

I think one of the most exciting trends in the mobile development world over the last decade has been augmented reality. Today I will try to explain how augmented reality works and how to write your very own augmented reality engine. This article will come in three parts:

  1. The first chapter covers creating a SurfaceView, placing the camera preview on the surface, and overlaying one more transparent view on top of it.
  2. The second chapter explains how to get latitude, longitude, azimuth and pitch, with some more information about sensors.
  3. The third chapter covers OpenGL: calculating the x, y and z coordinates of a shape and translating its 3D position onto the camera view.
We will use the following permissions:


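Only the camera permission is strictly required here; the location permission is an assumption on my part, added in anticipation of the latitude and longitude work in part two. In the manifest that would look like:

```xml
<!-- required for this chapter: access to the camera preview -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- assumed for part two: latitude/longitude via the location providers -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
```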
But just the camera permission is needed for the first chapter. There are three important components in our activity: first, the SurfaceView; second, the SurfaceHolder callback; and third, the camera. We don't have to specify a picture callback for the camera, since we will never take a picture. We are going to create a surface, listen to it, and once the surface is created, set it as the preview display of the camera.

So this is how our activity will look:

import java.io.IOException;

import android.app.Activity;
import android.content.Context;
import android.graphics.PixelFormat;
import android.hardware.Camera;
import android.hardware.Camera.Parameters;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewGroup;

public class AugmentedReality extends Activity {
    private ViewGroup mainView;
    private View appView;
    private MyCameraView cv;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        //as everybody says, this will run first :)
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        //the main container for the application. We will overlay two views here
        //this element should be defined in the XML layout file
        mainView = (ViewGroup) findViewById(R.id.main);
        //create a local variable for the camera view
        //we will define this class inside the activity
        cv = new MyCameraView(this.getApplicationContext());
        //add it to the main view
        mainView.addView(cv);
        //create another, transparent view
        //no background color and no resource, so the holder of the view will be totally invisible
        appView = new View(this.getApplicationContext());
        //we add the second view last, so it will take the highest z index automatically
        mainView.addView(appView);
    }

    //now we are outside of onCreate. Let's define the camera view
    public class MyCameraView extends SurfaceView {
        private SurfaceHolder previewHolder;

        //the constructor
        public MyCameraView(Context ctx) {
            super(ctx);
            //we get the holder
            previewHolder = this.getHolder();
            //set the type (push buffers: the camera writes frames directly to the surface)
            previewHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
            //and add the listener which is defined below
            previewHolder.addCallback(surfaceHolderListener);
        }

        //the listener triggers the camera preview once the surface is actually created
        SurfaceHolder.Callback surfaceHolderListener = new SurfaceHolder.Callback() {
            private Camera camera;

            public void surfaceCreated(SurfaceHolder holder) {
                //so the surface is created, we can open the camera
                camera = Camera.open();
                Parameters params = camera.getParameters();
                //we set the format of the preview so it will be encoded correctly
                //if you get colored horizontal lines instead of a preview, this is the reason
                params.setPreviewFormat(PixelFormat.YCbCr_420_SP);
                //we don't have to define the format of the taken picture
                //since we are not going to take a picture

                //finally, set the new parameters
                camera.setParameters(params);
                try {
                    //set the surface where the preview should be displayed
                    camera.setPreviewDisplay(holder);
                } catch (IOException e1) {
                    e1.printStackTrace();
                }
                //and start streaming frames to the surface
                camera.startPreview();
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
                //this event is fired when any value of the surface is changed,
                //for example when the device switches to landscape mode automatically
                //you can add your own function here
            }

            public void surfaceDestroyed(SurfaceHolder arg0) {
                //ok, when the application is shut down, we release the camera
                //so other applications can use it too
                camera.stopPreview();
                camera.release();
            }
        };
    }
}

This code will give us the preview of the camera on a surface. In the next chapter we will dive into sensors and retrieve the data we need: latitude, longitude, azimuth and pitch. Thanks for reading.
