Wednesday, 16 December 2009

Android, Reflections with Bitmaps


Looking to add some polish to your applications? Maybe enough to see a bit of a reflection? ;)  Lots of applications use the effect of creating a reflection of the original image, one of them being the Cover Flow view on the iPhone. It's a nice bit of presentation polish to add to your UI, and when you know how, it's not difficult to implement. In Android you'll need to make use of a number of classes in the graphics package, such as Canvas, Matrix, Bitmap and a few others.
So without further ado, here's the source code for creating an image with a reflection:
package com.example.reflection;


import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.LinearGradient;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.PorterDuffXfermode;
import android.graphics.Bitmap.Config;
import android.graphics.PorterDuff.Mode;
import android.graphics.Shader.TileMode;
import android.os.Bundle;
import android.view.WindowManager.LayoutParams;
import android.widget.ImageView;
import android.widget.LinearLayout;

public class Reflection extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        
        //The gap we want between the reflection and the original image
        final int reflectionGap = 4;
        
        //Get your bitmap from the drawable folder
        Bitmap originalImage = BitmapFactory.decodeResource(getResources(), 
                R.drawable.killers_day_and_age);
        
        int width = originalImage.getWidth();
        int height = originalImage.getHeight();
        
       
        //This will not scale, but will flip the image vertically
        Matrix matrix = new Matrix();
        matrix.preScale(1, -1);
        
        //Create a Bitmap with the flip matrix applied to it.
        //We only want the bottom half of the image
        Bitmap reflectionImage = Bitmap.createBitmap(originalImage, 0, height/2, width, height/2, matrix, false);
        
            
        //Create a new bitmap with same width but taller to fit reflection
        Bitmap bitmapWithReflection = Bitmap.createBitmap(width,
                (height + height/2), Config.ARGB_8888);
      
       //Create a new Canvas with the bitmap that's big enough for
       //the image plus gap plus reflection
       Canvas canvas = new Canvas(bitmapWithReflection);
       //Draw in the original image
       canvas.drawBitmap(originalImage, 0, 0, null);
       //Draw in the gap
       Paint defaultPaint = new Paint();
       canvas.drawRect(0, height, width, height + reflectionGap, defaultPaint);
       //Draw in the reflection
       canvas.drawBitmap(reflectionImage,0, height + reflectionGap, null);
       
       //Create a shader that is a linear gradient that covers the reflection
       Paint paint = new Paint(); 
       LinearGradient shader = new LinearGradient(0, originalImage.getHeight(), 0, 
         bitmapWithReflection.getHeight() + reflectionGap, 0x70ffffff, 0x00ffffff, 
         TileMode.CLAMP); 
       //Set the paint to use this shader (linear gradient)
       paint.setShader(shader); 
       //Set the Transfer mode to be porter duff and destination in
       paint.setXfermode(new PorterDuffXfermode(Mode.DST_IN)); 
       //Draw a rectangle using the paint with our linear gradient
       canvas.drawRect(0, height, width, 
         bitmapWithReflection.getHeight() + reflectionGap, paint); 
       
       //Create an Image view and add our bitmap with reflection to it
       ImageView imageView = new ImageView(this);
       imageView.setImageBitmap(bitmapWithReflection);
       
       //Add the image to a linear layout and display it
       LinearLayout linLayout = new LinearLayout(this); 
       linLayout.addView(imageView, 
               new LinearLayout.LayoutParams( 
                           LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT 
                     ) 
             ); 
             
        // set LinearLayout as ContentView 
        setContentView(linLayout); 
    }
}
I've added plenty of comments in here, more than I would normally, so I think most of the code is fairly self-explanatory. But just in case, here are the basic steps needed to create an image with a reflection.

  1. First, load the bitmap using the BitmapFactory. I'm loading in a Killers album cover, but you'll need to change this to load in your own JPG, PNG, whatever.
  2. We then create a new Matrix object, which we are going to use to perform a transformation on our original image. Remember, the Matrix class is used to perform transformations on bitmaps such as translate (i.e. move it), scale (change the size) and rotate. In this case we use scale, but we are not actually changing the size of the image, since we specify a scale of 1. The interesting point here is that the y scale is negative, which negates all the y coordinates in our bitmap. This has the effect of flipping the image vertically.
  3. Once we have our matrix we pass it to createBitmap to create a flipped bitmap of the original image. Notice that we also specify that this new Bitmap should only be the bottom half of the original image.
  4. So now we have two bitmaps, our original and our reflection. What I want to do now is join these together in one bitmap. To do this I create a new bitmap that is big enough to contain both of the original bitmaps. I called this bitmapWithReflection. When first created it is just empty.
  5. Next step is to create a canvas and give it the bitmapWithReflection bitmap.
  6. Next we draw in the original bitmap using canvas.drawBitmap.
  7. Then add a small gap between the images using drawRect.
  8. Then add the reflection, again using drawBitmap.
  9. We now have the fully formed bitmap, but to make the reflection look convincing we need to fade out the bottom half of the image. To do this we apply an alpha linear gradient. The alpha value of each pixel determines how opaque or transparent it is.
  10. So we create a linear gradient that is the size of the reflection part of the image, and give it an alpha gradient ranging from 0x70 down to 0x00.
  11. We then set up a Paint object with the LinearGradient as its shader and a Porter-Duff transfer mode of Mode.DST_IN. Porter-Duff modes allow us to merge images together according to a set of rules; each mode defines a different resulting output for the two merged images. For more information see here.
  12. Once we have our Paint object set up, we draw a rectangle over the reflection part of the image using it. Because we have set the transfer mode and the linear gradient shader, this gives us the fade-out effect we want for our reflection.
  13. Last, we just add our new bitmap to an ImageView and then put this in a LinearLayout.
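
If you want to reuse the effect, the steps above fold naturally into a helper method. Here's a minimal sketch (the class and method names are my own, not part of the Android API):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.LinearGradient;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.PorterDuffXfermode;
import android.graphics.Bitmap.Config;
import android.graphics.PorterDuff.Mode;
import android.graphics.Shader.TileMode;

public class ReflectionUtil {

    //Same steps as above, folded into a reusable method
    public static Bitmap createReflectedImage(Bitmap originalImage) {
        final int reflectionGap = 4;
        int width = originalImage.getWidth();
        int height = originalImage.getHeight();

        //Flip the bottom half of the image vertically
        Matrix matrix = new Matrix();
        matrix.preScale(1, -1);
        Bitmap reflectionImage = Bitmap.createBitmap(originalImage, 0, height/2,
                width, height/2, matrix, false);

        //Original image plus gap plus reflection
        Bitmap bitmapWithReflection = Bitmap.createBitmap(width,
                height + height/2, Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmapWithReflection);
        canvas.drawBitmap(originalImage, 0, 0, null);
        canvas.drawRect(0, height, width, height + reflectionGap, new Paint());
        canvas.drawBitmap(reflectionImage, 0, height + reflectionGap, null);

        //Fade the reflection out with an alpha gradient
        Paint paint = new Paint();
        paint.setShader(new LinearGradient(0, height, 0,
                bitmapWithReflection.getHeight() + reflectionGap,
                0x70ffffff, 0x00ffffff, TileMode.CLAMP));
        paint.setXfermode(new PorterDuffXfermode(Mode.DST_IN));
        canvas.drawRect(0, height, width,
                bitmapWithReflection.getHeight() + reflectionGap, paint);

        return bitmapWithReflection;
    }
}
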
So that's it, pretty straightforward really. Sometimes it's the little things that can help your UI look and feel professional. Finally, here's an example of using these reflections in the Gallery component:
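
As a rough sketch of how that might be wired up (the drawable ids are placeholders for your own images, and createReflectedImage is the helper sketched above):

import android.content.Context;
import android.graphics.BitmapFactory;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.Gallery;
import android.widget.ImageView;

public class ReflectionAdapter extends BaseAdapter {

    private final Context context;
    //Placeholder resource ids - swap in your own album covers
    private final int[] imageIds = { R.drawable.cover_one, R.drawable.cover_two };

    public ReflectionAdapter(Context context) {
        this.context = context;
    }

    public int getCount() {
        return imageIds.length;
    }

    public Object getItem(int position) {
        return imageIds[position];
    }

    public long getItemId(int position) {
        return position;
    }

    public View getView(int position, View convertView, ViewGroup parent) {
        ImageView imageView = new ImageView(context);
        imageView.setImageBitmap(ReflectionUtil.createReflectedImage(
                BitmapFactory.decodeResource(context.getResources(),
                        imageIds[position])));
        imageView.setLayoutParams(new Gallery.LayoutParams(120, 180));
        return imageView;
    }
}

Set an instance of this adapter on the Gallery with setAdapter and you get a scrollable row of reflected images.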


Tuesday, 24 November 2009

Android transitions, Views and Activities

When I talk about transitions in Android I'm referring to the animations that link Views or Activities together. So when a user navigates from one View or Activity to another, they may see the present widgets slide to the left and the new ones slide in from the right, or the old widgets might fade out while the new ones fade in. In a post I wrote a while back I outlined how to implement a slide-in slide-out animation with Views, using the Android ViewFlipper component and a number of translate animations. At the time I mentioned that this transition animation was for Views only and that there was a global control in the settings menu for enabling transitions for Activities; there did not seem to be a way of controlling the transitions on a per-Activity basis. That was back in Android 1.5, but since then things have moved on. With Android 2.0 it is now possible to control the transition animations for Activities. These transitions can be enabled or disabled on a per-Activity basis and it is also possible to set the in and out animation for each Activity.

The first thing you need to know to override an Activity transition is that there is a new method in the Activity class called overridePendingTransition. This function takes two arguments: the resource ID of the enter, or in, animation and the resource ID of the exit, or out, animation. There is also a new flag in the Intent class called FLAG_ACTIVITY_NO_ANIMATION, which lets you suppress the transition animation for a particular Activity launch.
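
As a minimal sketch, assuming a NextActivity of your own and a pair of your own animation resources (none of these names are part of the platform):

import android.app.Activity;
import android.content.Intent;
import android.view.View;

public class HomeActivity extends Activity {

    // Launch an Activity with custom enter/exit animations (Android 2.0+)
    public void openWithSlide(View v) {
        startActivity(new Intent(this, NextActivity.class));
        // Hypothetical resources, e.g. res/anim/slide_in_right.xml
        overridePendingTransition(R.anim.slide_in_right, R.anim.slide_out_left);
    }

    // Launch an Activity with its transition animation suppressed
    public void openWithoutAnimation(View v) {
        Intent intent = new Intent(this, NextActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_NO_ANIMATION);
        startActivity(intent);
    }
}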

So by using the new flag and the new function call it is now possible to control the transitions of your Activities in the same way as your Views. 

Android renderscript, more info' and an example application

Here are a few more snippets of information I've managed to uncover on Android renderscript:
  • Compiled on the device
  • Uses acc compiler
  • No architectural support issues.
  • No external libraries
  • You cannot #include
  • No allocation allowed
  • Safe predictable
I admit some of these statements don't give too much away, but there's not much else to go on at the moment. It does seem to point to a language that is compiled and C-like, but maybe without all the power of C, as allocations are not allowed.


Trying to shed some more light on things, I had another look in the source code and found a few simple example Android applications using renderscript. One of these, called Fountain, seemed to be one of the simplest, so I thought it would be a good place to start.


The Android Fountain renderscript example 
What does this application do? Well, I'm not totally sure, because I haven't run it myself, and to be honest there are not many comments in the code, so it really is a case of reading the code and working it out. My best guess is that it produces a fountain-like animation that randomly animates points on the screen, generally moving up and outwards in a fountain-like way. These animations start when a user touches the screen and originate from that point. This is my best guess at the moment from what I can make out in the code example.


OK, so what does the code look like? First let's have a look at the files and how they are laid out; the structure is:
  • Android.mk
  • AndroidManifest.xml
  • res
    • drawable
      • gadgets_clock_mp3.png
    • raw
      • fountain.c
  • src
    • com
      • android
        • fountain
          • Fountain.java
          • FountainRS.java
          • FountainView.java
Most of what we see here is a standard-looking Android application. We have the basic Android files such as the AndroidManifest file. Then we have our src directory that contains our Android application source files, and the res directory, not unusual in that it contains the drawable and raw directories, but as you can see the raw directory contains one very interesting and rather unusual file: fountain.c. This, it seems, is where the renderscript code resides, and the name does indeed suggest that it is a C code file. So let's have a look at what is contained in this file:
// Fountain test script
#pragma version(1)

int newPart = 0;

int main(int launchID) {
    int ct;
    int count = Control->count;
    int rate = Control->rate;
    float height = getHeight();
    struct point_s * p = (struct point_s *)point;

    if (rate) {
        float rMax = ((float)rate) * 0.005f;
        int x = Control->x;
        int y = Control->y;
        char r = Control->r * 255.f;
        char g = Control->g * 255.f;
        char b = Control->b * 255.f;
        struct point_s * np = &p[newPart];

        while (rate--) {
            vec2Rand((float *)np, rMax);
            np->x = x;
            np->y = y;
            np->r = r;
            np->g = g;
            np->b = b;
            np->a = 0xf0;
            newPart++;
            np++;
            if (newPart >= count) {
                newPart = 0;
                np = &p[newPart];
            }
        }
    }

    for (ct=0; ct < count; ct++) {
        float dy = p->dy + 0.15f;
        float posy = p->y + dy;
        if ((posy > height) && (dy > 0)) {
            dy *= -0.3f;
        }
        p->dy = dy;
        p->x += p->dx;
        p->y = posy;
        p++;
    }

    uploadToBufferObject(NAMED_PartBuffer);
    drawSimpleMesh(NAMED_PartMesh);
    return 1;
}
Yes, it is very C-like. We have structs, pointers and chars. Starting at the top of the file, we have a Control structure or class that gives us a rate and a count, as well as x, y and r, g, b values. Where does this Control structure get instantiated? I'll come back to this. Another structure used in this code is point_s. This structure has x and y coordinates; r, g, b values, which are likely red, green and blue; and an "a" value, which is the alpha value. Without more information I cannot be sure exactly what is happening in this code, but I think that generally an array of points is given and an array of new points is generated, to allow some kind of animation.


Looking at the src directory we have three .java files: Fountain.java, FountainView.java and FountainRS.java. Fountain.java is just a basic Android Activity class with an onCreate method that sets the content view to an instance of FountainView. The code for the FountainView.java file looks like this:
/*
 * Copyright (C) 2008 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.android.fountain;

import java.io.Writer;
import java.util.ArrayList;
import java.util.concurrent.Semaphore;

import android.renderscript.RSSurfaceView;
import android.renderscript.RenderScript;

import android.content.Context;
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.drawable.BitmapDrawable;
import android.graphics.drawable.Drawable;
import android.os.Handler;
import android.os.Message;
import android.util.AttributeSet;
import android.util.Log;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.KeyEvent;
import android.view.MotionEvent;

public class FountainView extends RSSurfaceView {

    public FountainView(Context context) {
        super(context);
        //setFocusable(true);
    }

    private RenderScript mRS;
    private FountainRS mRender;

    private void destroyRS() {
        if(mRS != null) {
            mRS = null;
            destroyRenderScript();
        }
        java.lang.System.gc();
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        super.surfaceChanged(holder, format, w, h);
        destroyRS();
        mRS = createRenderScript(false, true);
        mRender = new FountainRS();
        mRender.init(mRS, getResources(), w, h);
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        // Surface will be destroyed when we return
        destroyRS();
    }



    @Override
    public boolean onTouchEvent(MotionEvent ev)
    {
        int act = ev.getAction();
        if (act == ev.ACTION_UP) {
            mRender.newTouchPosition(0, 0, 0);
            return false;
        }
        float rate = (ev.getPressure() * 50.f);
        rate *= rate;
        if(rate > 2000.f) {
            rate = 2000.f;
        }
        mRender.newTouchPosition((int)ev.getX(), (int)ev.getY(), (int)rate);
        return true;
    }
}
The FountainView class is an Android View. As you can see from the code, FountainView extends a new type of Android view, RSSurfaceView. It also has references to the new RenderScript class and our defined FountainRS class. When creating a new surface in the surfaceChanged method, a new RenderScript object and a new FountainRS object are created. We also call the init method on the FountainRS object and pass in several arguments, including a reference to the RenderScript object. So let's have a look at the FountainRS.java file:
/*
 * Copyright (C) 2008 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.android.fountain;

import android.content.res.Resources;
import android.renderscript.*;
import android.util.Log;


public class FountainRS {
    public static final int PART_COUNT = 20000;

    static class SomeData {
        public int x;
        public int y;
        public int rate;
        public int count;
        public float r;
        public float g;
        public float b;
    }

    public FountainRS() {
    }

    public void init(RenderScript rs, Resources res, int width, int height) {
        mRS = rs;
        mRes = res;
        initRS();
    }

    public void newTouchPosition(int x, int y, int rate) {
        if (mSD.rate == 0) {
            mSD.r = ((x & 0x1) != 0) ? 0.f : 1.f;
            mSD.g = ((x & 0x2) != 0) ? 0.f : 1.f;
            mSD.b = ((x & 0x4) != 0) ? 0.f : 1.f;
            if ((mSD.r + mSD.g + mSD.b) < 0.9f) {
                mSD.r = 0.8f;
                mSD.g = 0.5f;
                mSD.b = 1.f;
            }
        }
        mSD.rate = rate;
        mSD.x = x;
        mSD.y = y;
        mIntAlloc.data(mSD);
    }


    /////////////////////////////////////////

    private Resources mRes;

    private RenderScript mRS;
    private Allocation mIntAlloc;
    private SimpleMesh mSM;
    private SomeData mSD;
    private Type mSDType;

    private void initRS() {
        mSD = new SomeData();
        mSDType = Type.createFromClass(mRS, SomeData.class, 1, "SomeData");
        mIntAlloc = Allocation.createTyped(mRS, mSDType);
        mSD.count = PART_COUNT;
        mIntAlloc.data(mSD);

        Element.Builder eb = new Element.Builder(mRS);
        eb.addFloat(Element.DataKind.USER, "dx");
        eb.addFloat(Element.DataKind.USER, "dy");
        eb.addFloatXY("");
        eb.addUNorm8RGBA("");
        Element primElement = eb.create();


        SimpleMesh.Builder smb = new SimpleMesh.Builder(mRS);
        int vtxSlot = smb.addVertexType(primElement, PART_COUNT);
        smb.setPrimitive(Primitive.POINT);
        mSM = smb.create();
        mSM.setName("PartMesh");

        Allocation partAlloc = mSM.createVertexAllocation(vtxSlot);
        partAlloc.setName("PartBuffer");
        mSM.bindVertexAllocation(partAlloc, 0);

        // All setup of named objects should be done by this point
        // because we are about to compile the script.
        ScriptC.Builder sb = new ScriptC.Builder(mRS);
        sb.setScript(mRes, R.raw.fountain);
        sb.setRoot(true);
        sb.setType(mSDType, "Control", 0);
        sb.setType(mSM.getVertexType(0), "point", 1);
        Script script = sb.create();
        script.setClearColor(0.0f, 0.0f, 0.0f, 1.0f);

        script.bindAllocation(mIntAlloc, 0);
        script.bindAllocation(partAlloc, 1);
        mRS.contextBindRootScript(script);
    }

}
I'm not going to try and go through every detail of this code, but the interesting areas are in the initRS function. Here we have Element builders, SimpleMesh builders and, last but not least, a script builder. We get a script builder instance, point it at the fountain.c resource, set up some named types such as Control and point (remember, these were used in the fountain.c file) and then create the script.


So there we have it, a quick peek into how renderscript might be used. There are still a lot of unanswered questions, and there is still a lot more to learn about how renderscript will, and can, be used, but I hope these few code snippets will at least give people a starting point. As usual, if anyone else out there has any interesting insights or comments I'd be really interested to hear them.

Friday, 20 November 2009

Android and RenderScript

It seems that the new and exciting high-performance graphics technology for Android is RenderScript. There's not much detail on how to use RenderScript, but it is said to be a C-like language for high-performance graphics programming, which helps you easily write efficient visual effects and animations in your Android applications. RenderScript isn't released yet, as it isn't finished. After having a dig around in the source code I found the Java libs for RenderScript here. I'm sure there will be more information on this new and exciting addition to the Android graphics framework soon. When I find out more I'll post it here. Hopefully I'll also have a few tutorials on RenderScript soon.

Tuesday, 3 November 2009

Android UI and Animations, what's new?


Over the last few days I've been looking at what's new for user interfaces and animations in the latest two releases of Android. Of course there have been some big changes, such as the addition of gesture recognition and multi-touch, but I wanted to take a look at some of the changes that have been less well publicised, but are still well worth knowing about. Here are some of the things that I found:



Android 1.6
For Android 1.6 here are some of the UI and Animation updates.

Interpolators
In Android 1.6 we got a whole new set of interpolators. Remember, interpolators control how animations progress over time; each animation has its own interpolator. A LinearInterpolator means that the animation plays at a constant rate for its duration, while an AccelerateInterpolator means that the animation starts slowly and speeds up as it progresses. The new interpolators can all be found in android.view.animation and they are:

  • AnticipateInterpolator: An interpolator where the change starts backward then flings forward.
  • AnticipateOvershootInterpolator: An interpolator where the change starts backward then flings forward and overshoots the target value and finally goes back to the final value.
  • BounceInterpolator: An interpolator where the change bounces at the end.
  • OvershootInterpolator: An interpolator where the change flings forward and overshoots the last value then comes back.

For me these are an interesting addition to the animation framework. Taken together they can be used to create animations with a springy, bouncy, elastic feel, which you'll be very familiar with if you've ever used an iPhone. ;)

Here's an example of using the OvershootInterpolator when applied to the ViewFlipper slide animation shown in a previous post:
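
A minimal sketch, reusing the inFromRightAnimation() helper from that post with only the interpolator swapped (flipper is the ViewFlipper from that example):

Animation inFromRight = new TranslateAnimation(
        Animation.RELATIVE_TO_PARENT, +1.0f, Animation.RELATIVE_TO_PARENT, 0.0f,
        Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f);
inFromRight.setDuration(500);
// The only change: OvershootInterpolator instead of AccelerateInterpolator
inFromRight.setInterpolator(new OvershootInterpolator());
flipper.setInAnimation(inFromRight);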

Notice the slight overshoot at the end of the animation...

On click listeners
Another interesting addition to the UI framework in Android 1.6 is the ability to define click listeners in your XML layout file like this:
<Button android:onClick="myClickHandler" />

Now all you need is a public myClickHandler function that takes a View as an argument:
class MyActivity extends Activity {
    public void myClickHandler(View target) {
        // Do stuff
    }
}
This gives the programmer a much more concise way of declaring and implementing click listeners.

For more info' on the changes to the UI framework in Android 1.6 see the post here.

Android 2.0 UI changes
For Android 2.0 we have:

New system themes
New system themes in android.R.style to easily display activities on top of the current system wallpaper. The new themes available are:
Theme_Light_WallpaperSettings
Theme_Wallpaper
Theme_WallpaperSettings
Theme_Wallpaper_NoTitleBar
Theme_Wallpaper_NoTitleBar_Fullscreen
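
As a sketch, applying one of these themes to an activity in the manifest might look like this (the activity name is a placeholder):

<activity android:name=".WallpaperDemoActivity"
    android:theme="@android:style/Theme.Wallpaper.NoTitleBar" />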

WallpaperManager API
New WallpaperManager API replaces and extends the wallpaper APIs that were previously in Context, to allow applications to request and set the system wallpaper. There are also two new methods in Animation called setDetachWallpaper and getDetachWallpaper. By setting detach wallpaper to true, an animation will only be applied to the window, and the wallpaper behind it will remain static.
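
A minimal sketch of both (the drawable id is a placeholder, and the context and animation variables are assumed to exist in your code):

// Set the system wallpaper via the new WallpaperManager (Android 2.0+)
WallpaperManager wallpaperManager = WallpaperManager.getInstance(context);
try {
    wallpaperManager.setResource(R.drawable.my_wallpaper);
} catch (IOException e) {
    // the wallpaper could not be loaded
}

// Keep the wallpaper static while this window animation runs
animation.setDetachWallpaper(true);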

Key Event Dispatching
KeyEvent has new key dispatching APIs to help implement action-on-up and long-press behaviour, as well as a new mechanism to cancel key presses (for virtual keys).
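
A sketch of the action-on-up pattern inside an Activity (handling the back key is just an example):

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_BACK) {
        // Start tracking so we can act on the up event (or a long press)
        event.startTracking();
        return true;
    }
    return super.onKeyDown(keyCode, event);
}

@Override
public boolean onKeyUp(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_BACK
            && event.isTracking() && !event.isCanceled()) {
        finish(); // only act if the press was not cancelled
        return true;
    }
    return super.onKeyUp(keyCode, event);
}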

One last interesting thing on UI and animations for Android 2.0: I've heard rumours that there is a "new exciting graphics technology" being worked on. I couldn't see any real evidence of this in the current Android 2.0 release, but I've read on blogs that there is another minor update due before the end of the year. So who knows, maybe this new graphics technology will be part of that minor release. If I find anything I will of course write about it and let others know.

Tuesday, 20 October 2009

Android emulator: simulating the sensor inputs

On its own the Android emulator cannot simulate physically moving the device, but with additional software we can emulate the movement and position of a physical Android device. The software that we'll need to do this is called Sensor Simulator and can be downloaded from here.

Download this zip file and unpack it. In the bin directory is a file called sensorsimulator.jar. Just click on this and it will run the sensor simulator application that will allow you to control the movement of your virtual device. You'll also need to install SensorSimulatorSettings.apk on your emulator and set the IP address that is shown in the sensor simulator app. To install the .apk file use this command:


adb install \bin\SensorSimulatorSettings.apk

Once you have everything installed and running, you can manipulate the orientation of your virtual phone in the sensor simulator application just by rotating the wireframe representation of the mobile phone. The sensors that are currently supported are the accelerometer, compass and orientation sensors; additionally there is also support for a temperature sensor.

To use the sensor simulator in your application takes a few simple changes to the code, which are outlined here. Once this is done you can test out your application that uses sensor data in your emulator, a much quicker process than loading it onto an actual Android phone.

There are more instructions on the Sensor Simulator web site, but it's all straightforward and you should be able to virtually control the orientation of your emulator in no time.

Monday, 21 September 2009

Live camera preview in the Android emulator

I've been looking into getting a live camera preview working in the Android emulator. Currently the Android emulator just gives a black and white chessboard animation. After having a look around I found the web site of Tom Gibara, who has done some great work to get a live preview working in the emulator. The link to his work can be found here. The basics are that you run the WebcamBroadcaster as a standard Java app on your PC. If there are any video devices attached to your PC, it will pick them up and broadcast the frames captured over a socket connection. You then run a SocketCamera class as part of an app in the Android emulator, and as long as you have the correct IP address and port it should display the captured images in the emulator. On looking into Tom's code I saw that it seemed to be written for an older version of the Android API, so I thought I'd have a go at updating it. As a starting point I'm going to use the CameraPreview sample code available on the Android developers website. My aim was to take this code and, with as few changes as possible, make it usable for a live camera preview in the emulator.

So the first thing I did was to create a new class called SocketCamera. This is based on Tom's version of the SocketCamera, but unlike Tom's version I am trying to implement a subset of the new camera class android.hardware.Camera and not the older class android.hardware.CameraDevice. Please keep in mind that I've implemented just a subset of the Camera class API. The code was put together fairly quickly and is a bit rough around the edges. Anyhow, here's my new SocketCamera class:


package com.example.socketcamera;

import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.hardware.Camera;
import android.hardware.Camera.Size;
import android.util.Log;
import android.view.SurfaceHolder;

public class SocketCamera {

    private static final String LOG_TAG = "SocketCamera:";
    private static final int SOCKET_TIMEOUT = 1000;

    static private SocketCamera socketCamera;
    private CameraCapture capture;
    private Camera parametersCamera;
    private SurfaceHolder surfaceHolder;

    //Set the IP address of your pc here!!
    private final String address = "192.168.1.12";
    private final int port = 9889;

    private final boolean preserveAspectRatio = true;
    private final Paint paint = new Paint();

    private int width = 240;
    private int height = 320;
    private Rect bounds = new Rect(0, 0, width, height);

    private SocketCamera() {
        //Just used so that we can pass Camera.Parameters in getters and setters
        parametersCamera = Camera.open();
    }

    static public SocketCamera open() {
        if (socketCamera == null) {
            socketCamera = new SocketCamera();
        }

        Log.i(LOG_TAG, "Creating Socket Camera");
        return socketCamera;
    }

    public void startPreview() {
        capture = new CameraCapture();
        capture.setCapturing(true);
        capture.start();
        Log.i(LOG_TAG, "Starting Socket Camera");
    }

    public void stopPreview() {
        capture.setCapturing(false);
        Log.i(LOG_TAG, "Stopping Socket Camera");
    }

    public void setPreviewDisplay(SurfaceHolder surfaceHolder) throws IOException {
        this.surfaceHolder = surfaceHolder;
    }

    public void setParameters(Camera.Parameters parameters) {
        //Bit of a hack so the interface looks like that of android.hardware.Camera
        Log.i(LOG_TAG, "Setting Socket Camera parameters");
        parametersCamera.setParameters(parameters);
        Size size = parameters.getPreviewSize();
        bounds = new Rect(0, 0, size.width, size.height);
    }

    public Camera.Parameters getParameters() {
        Log.i(LOG_TAG, "Getting Socket Camera parameters");
        return parametersCamera.getParameters();
    }

    public void release() {
        Log.i(LOG_TAG, "Releasing Socket Camera parameters");
        //TODO need to implement this function
    }

    private class CameraCapture extends Thread {

        private boolean capturing = false;

        public boolean isCapturing() {
            return capturing;
        }

        public void setCapturing(boolean capturing) {
            this.capturing = capturing;
        }

        @Override
        public void run() {
            while (capturing) {
                Canvas c = null;
                try {
                    c = surfaceHolder.lockCanvas(null);
                    synchronized (surfaceHolder) {
                        Socket socket = null;
                        try {
                            socket = new Socket();
                            socket.bind(null);
                            socket.setSoTimeout(SOCKET_TIMEOUT);
                            socket.connect(new InetSocketAddress(address, port), SOCKET_TIMEOUT);

                            //obtain the bitmap
                            InputStream in = socket.getInputStream();
                            Bitmap bitmap = BitmapFactory.decodeStream(in);

                            //render it to canvas, scaling if necessary
                            if (bounds.right == bitmap.getWidth()
                                    && bounds.bottom == bitmap.getHeight()) {
                                c.drawBitmap(bitmap, 0, 0, null);
                            } else {
                                Rect dest;
                                if (preserveAspectRatio) {
                                    dest = new Rect(bounds);
                                    dest.bottom = bitmap.getHeight() * bounds.right / bitmap.getWidth();
                                    dest.offset(0, (bounds.bottom - dest.bottom) / 2);
                                } else {
                                    dest = bounds;
                                }
                                if (c != null) {
                                    c.drawBitmap(bitmap, null, dest, paint);
                                }
                            }

                        } catch (RuntimeException e) {
                            e.printStackTrace();
                        } catch (IOException e) {
                            e.printStackTrace();
                        } finally {
                            try {
                                socket.close();
                            } catch (IOException e) {
                                /* ignore */
                            }
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        surfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            }
            Log.i(LOG_TAG, "Socket Camera capture stopped");
        }
    }

}

Make sure that you change the IP address to that of your PC.

Now we just need to make a few small modifications to the original CameraPreview. In this class look for the Preview class that extends SurfaceView. We just need to comment out three lines and replace them with our own:



class Preview extends SurfaceView implements SurfaceHolder.Callback {
    SurfaceHolder mHolder;
    //Camera mCamera;
    SocketCamera mCamera;

    Preview(Context context) {
        super(context);

        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        mHolder = getHolder();
        mHolder.addCallback(this);
        //mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        // The Surface has been created, acquire the camera and tell it where
        // to draw.
        //mCamera = Camera.open();
        mCamera = SocketCamera.open();
        try {
            mCamera.setPreviewDisplay(holder);
        } catch (IOException exception) {
            mCamera.release();
            mCamera = null;
            // TODO: add more exception handling logic here
        }
    }


Here I've changed three lines:

1. Camera mCamera is replaced with SocketCamera mCamera
2. mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); is replaced with mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
3. mCamera = Camera.open(); is replaced with mCamera = SocketCamera.open();

So that's it. Now just make sure WebcamBroadcaster is running and start up the CameraPreview app in the Android emulator, and you should see live previews in the emulator. Here's a short video of my emulator with the live preview: (yes, I know, it's me waving a book around)

Note: if the WebcamBroadcaster is not picking up your devices you most probably have a classpath issue. Make sure that your classpath points to the jmf.jar that is in the same folder as the jmf.properties file. If JMStudio works OK, it's very likely that you have a classpath issue.
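
For example, on Windows the broadcaster might be launched like this (the JMF install path is a placeholder; the optional arguments are width, height and port):

java -classpath "C:\Program Files\JMF2.1.1e\lib\jmf.jar;." com.webcambroadcaster.WebcamBroadcaster 320 240 9889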

Oh, one last thing. I also updated the WebcamBroadcaster so that it can be used with YUV format cameras, so here's the code for that as well:

package com.webcambroadcaster;

import java.awt.Dimension;
import java.awt.image.BufferedImage;
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Vector;

import javax.imageio.ImageIO;
import javax.media.Buffer;
import javax.media.CannotRealizeException;
import javax.media.CaptureDeviceInfo;
import javax.media.CaptureDeviceManager;
import javax.media.Format;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.NoDataSourceException;
import javax.media.NoPlayerException;
import javax.media.Player;
import javax.media.control.FrameGrabbingControl;
import javax.media.format.RGBFormat;
import javax.media.format.VideoFormat;
import javax.media.format.YUVFormat;
import javax.media.protocol.CaptureDevice;
import javax.media.protocol.DataSource;
import javax.media.util.BufferToImage;

/**
 * A disposable class that uses JMF to serve a still sequence captured from a
 * webcam over a socket connection. It doesn't use TCP, it just blindly
 * captures a still, JPEG compresses it, and pumps it out over any incoming
 * socket connection.
 *
 * @author Tom Gibara
 *
 */

public class WebcamBroadcaster {

    public static boolean RAW = false;

    private static Player createPlayer(int width, int height) {
        try {
            Vector<CaptureDeviceInfo> devices = CaptureDeviceManager.getDeviceList(null);
            for (CaptureDeviceInfo info : devices) {
                DataSource source;
                Format[] formats = info.getFormats();
                for (Format format : formats) {
                    if ((format instanceof RGBFormat)) {
                        RGBFormat rgb = (RGBFormat) format;
                        Dimension size = rgb.getSize();
                        if (size.width != width || size.height != height) continue;
                        if (rgb.getPixelStride() != 3) continue;
                        if (rgb.getBitsPerPixel() != 24) continue;
                        if (rgb.getLineStride() != width * 3) continue;
                        MediaLocator locator = info.getLocator();
                        source = Manager.createDataSource(locator);
                        source.connect();
                        System.out.println("RGB Format Found");
                        ((CaptureDevice) source).getFormatControls()[0].setFormat(rgb);
                    } else if ((format instanceof YUVFormat)) {
                        YUVFormat yuv = (YUVFormat) format;
                        Dimension size = yuv.getSize();
                        if (size.width != width || size.height != height) continue;
                        MediaLocator locator = info.getLocator();
                        source = Manager.createDataSource(locator);
                        source.connect();
                        System.out.println("YUV Format Found");
                        ((CaptureDevice) source).getFormatControls()[0].setFormat(yuv);
                    } else {
                        continue;
                    }

                    return Manager.createRealizedPlayer(source);
                }
            }
        } catch (IOException e) {
            System.out.println(e.toString());
            e.printStackTrace();
        } catch (NoPlayerException e) {
            System.out.println(e.toString());
            e.printStackTrace();
        } catch (CannotRealizeException e) {
            System.out.println(e.toString());
            e.printStackTrace();
        } catch (NoDataSourceException e) {
            System.out.println(e.toString());
            e.printStackTrace();
        }
        return null;
    }

    public static void main(String[] args) {
        int[] values = new int[args.length];
        for (int i = 0; i < values.length; i++) {
            values[i] = Integer.parseInt(args[i]);
        }

        WebcamBroadcaster wb;
        if (values.length == 0) {
            wb = new WebcamBroadcaster();
        } else if (values.length == 1) {
            wb = new WebcamBroadcaster(values[0]);
        } else if (values.length == 2) {
            wb = new WebcamBroadcaster(values[0], values[1]);
        } else {
            wb = new WebcamBroadcaster(values[0], values[1], values[2]);
        }

        wb.start();
    }

    public static final int DEFAULT_PORT = 9889;
    public static final int DEFAULT_WIDTH = 320;
    public static final int DEFAULT_HEIGHT = 240;

    private final Object lock = new Object();

    private final int width;
    private final int height;
    private final int port;

    private boolean running;

    private Player player;
    private FrameGrabbingControl control;
    private boolean stopping;
    private Worker worker;

    public WebcamBroadcaster(int width, int height, int port) {
        this.width = width;
        this.height = height;
        this.port = port;
    }

    public WebcamBroadcaster(int width, int height) {
        this(width, height, DEFAULT_PORT);
    }

    public WebcamBroadcaster(int port) {
        this(DEFAULT_WIDTH, DEFAULT_HEIGHT, port);
    }

    public WebcamBroadcaster() {
        this(DEFAULT_WIDTH, DEFAULT_HEIGHT, DEFAULT_PORT);
    }

    public void start() {
        synchronized (lock) {
            if (running) return;
            player = createPlayer(width, height);
            if (player == null) {
                System.err.println("Unable to find a suitable player");
                return;
            }
            System.out.println("Starting the player");
            player.start();
            control = (FrameGrabbingControl) player.getControl("javax.media.control.FrameGrabbingControl");
            worker = new Worker();
            worker.start();
            System.out.println("Grabbing frames");
            running = true;
        }
    }

    public void stop() throws InterruptedException {
        Worker w;
        synchronized (lock) {
            if (!running) return;
            if (player != null) {
                control = null;
                player.stop();
                player = null;
            }
            stopping = true;
            running = false;
            // keep a reference for the join below, then clear the field
            w = worker;
            worker = null;
        }
        try {
            if (w != null) w.join();
        } finally {
            stopping = false;
        }
    }

    private class Worker extends Thread {

        private final int[] data = new int[width * height];

        @Override
        public void run() {
            ServerSocket ss;
            try {
                ss = new ServerSocket(port);
            } catch (IOException e) {
                e.printStackTrace();
                return;
            }

            while (true) {
                FrameGrabbingControl c;
                synchronized (lock) {
                    if (stopping) break;
                    c = control;
                }
                Socket socket = null;
                try {
                    socket = ss.accept();

                    Buffer buffer = c.grabFrame();
                    BufferToImage btoi = new BufferToImage((VideoFormat) buffer.getFormat());
                    BufferedImage image = (BufferedImage) btoi.createImage(buffer);

                    if (image != null) {
                        OutputStream out = socket.getOutputStream();
                        if (RAW) {
                            image.getWritableTile(0, 0).getDataElements(0, 0, width, height, data);
                            image.releaseWritableTile(0, 0);
                            DataOutputStream dout = new DataOutputStream(new BufferedOutputStream(out));
                            for (int i = 0; i < data.length; i++) {
                                dout.writeInt(data[i]);
                            }
                            dout.close();
                        } else {
                            ImageIO.write(image, "JPEG", out);
                        }
                    }

                    socket.close();
                    socket = null;
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    if (socket != null) {
                        try {
                            socket.close();
                        } catch (IOException e) {
                            /* ignore */
                        }
                    }
                }

            }

            try {
                ss.close();
            } catch (IOException e) {
                /* ignore */
            }
        }

    }

}

Thursday, 27 August 2009

It's not just any AR it's Location Based Augmented Reality

There's a lot of talk out there at the moment on this subject, so I couldn't resist writing a few lines on it myself. Augmented reality has been around for some time, but with the introduction of smartphones with cameras, GPS, built-in compasses and accelerometers, it seems that AR is finally ready to hit the big time and find its way into the everyday life of the mobile phone user.

For me there seem to be two distinct types of AR. The first is AR that uses fiducial markers, motion tracking and computer vision. These applications take a fair amount of processing power and are not really suited to mobile phones. An example of this type of AR can be seen here. The second is location-based AR. This seems to be driving the new interest in AR, and there are a number of new applications emerging for both Android devices and the iPhone. These types of AR applications don't need motion tracking or complicated computer vision systems. They use GPS and the built-in compass of mobile devices to work out the distance and direction of POIs (Points Of Interest). These points of interest are then displayed over the live camera view, giving general information on the POI plus indications of its distance and direction. POIs can be anything, from Twitter users and houses for sale to interesting historical buildings.

One of the first AR applications for Android devices that I became aware of was Enkin, but since then Wikitude and Layar seem to be the two main front-running location-based AR applications available for Android. Wikitude gives the user information on general points of interest around their present location, such as interesting historical buildings, while the examples on Layar show services giving details of houses for sale and nearby Twitter users. Both websites seem to be offering APIs to allow third-party developers to build their own versions of AR applications.

AR is still commercially in its early stages, but the possibilities seem almost limitless and who knows where future advances in technology could take this interesting area. For now I'm going to stick with location-based AR, and the compass, GPS, accelerometer and camera APIs in the Android developer docs. I'll let you know how it goes....

Monday, 24 August 2009

Android Animations 3D flip

In this post we are going to look at how to create a 3D flip animation, with a FrameLayout.

In the first few posts I've written on Android and animations we have only looked at the predefined animations supplied in the android.view.animation package. In fact we've only used the translate animation, but as I've mentioned before there are also rotate, scale and alpha animations.

In this tutorial I want to take a further look at animations and how we can create our own custom animations using the Android library. I'm going to base the tutorial on some of the examples that can be found in the samples folders downloaded with the Android SDK. These are great examples, but unfortunately they seem to be a little light on documentation and explanation of how, and what, the code is doing. Hopefully by the end of this tutorial things should be a little clearer.

Before we start here's an example of the end result of our 3d flip:

So let's get started. First create a new Android project in Eclipse with these settings:
Project name: Flip3d
Application Name: Flip3d
package name: com.example.flip3d
Activity: Flip3d

I've used some Firefox image icons to animate; they are part of a very good and free icon set that can be found here. If you just want the images I've used, they can be found here and here. Download these images and place them in the res/drawable folder of the Flip3d project.

Let's start by defining a layout that we can use. As I mentioned, we are going to use a FrameLayout. As with the ViewFlipper, the FrameLayout can contain a number of child views, but unlike ViewFlipper, FrameLayout stacks its child views on top of each other with the most recently added child on top. Initially, without any intervention, all the child views of the FrameLayout are visible, but don't worry about that for now, we will solve it at a later stage. In the FrameLayout we will define two child views; these are the views that we will animate between. In res/layout edit the main.xml file so that it looks like this:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/container"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <include android:id="@+id/first_view" layout="@layout/first_view" />
    <include android:id="@+id/second_view" layout="@layout/second_view" />
</FrameLayout>
This is the frame layout, and we've specified that it contains two child views, first_view and second_view. Let's create these views. In res/layout create two files, first_view.xml and second_view.xml. The first view file needs to look like this:
<RelativeLayout android:id="@+id/Layout01"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
xmlns:android="http://schemas.android.com/apk/res/android">
<mageView
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:layout_width="wrap_content"
android:id="@+id/ImageView01"
android:layout_height="wrap_content"
android:src="@drawable/firefox"
>
</ImageView>
</RelativeLayout>
And second_view.xml contains this XML:
<RelativeLayout android:id="@+id/Layout02"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
xmlns:android="http://schemas.android.com/apk/res/android">
<ImageView
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:layout_width="wrap_content"
android:id="@+id/ImageView02"
android:layout_height="wrap_content"
android:src="@drawable/firefox_alt"
>
</ImageView>
</RelativeLayout>
These two views are almost identical and only differ in the image they contain, the two images that we will flip between. Now that we have our layouts defined, we need to write some Java to implement our 3D flip. Create a Java class called Flip3dAnimation in the com.example.flip3d package and paste in this code:
package com.example.flip3d;

import android.graphics.Camera;
import android.graphics.Matrix;
import android.view.animation.Animation;
import android.view.animation.Transformation;

public class Flip3dAnimation extends Animation {
    private final float mFromDegrees;
    private final float mToDegrees;
    private final float mCenterX;
    private final float mCenterY;
    private Camera mCamera;

    public Flip3dAnimation(float fromDegrees, float toDegrees,
            float centerX, float centerY) {
        mFromDegrees = fromDegrees;
        mToDegrees = toDegrees;
        mCenterX = centerX;
        mCenterY = centerY;
    }

    @Override
    public void initialize(int width, int height, int parentWidth, int parentHeight) {
        super.initialize(width, height, parentWidth, parentHeight);
        mCamera = new Camera();
    }

    @Override
    protected void applyTransformation(float interpolatedTime, Transformation t) {
        final float fromDegrees = mFromDegrees;
        float degrees = fromDegrees + ((mToDegrees - fromDegrees) * interpolatedTime);

        final float centerX = mCenterX;
        final float centerY = mCenterY;
        final Camera camera = mCamera;

        final Matrix matrix = t.getMatrix();

        camera.save();

        camera.rotateY(degrees);

        camera.getMatrix(matrix);
        camera.restore();

        matrix.preTranslate(-centerX, -centerY);
        matrix.postTranslate(centerX, centerY);
    }

}
Custom Animation class

This class extends the Animation class and implements the applyTransformation method. Each Animation has a Transformation object that defines the transformation to be applied at a point in time during the animation. While the animation is running, the applyTransformation method will be called a number of times to allow us to calculate the transformation to apply. Each time applyTransformation is called, the interpolatedTime value passed in will have increased slightly, starting at 0 and ending up at 1. So our float degrees will also increase slightly each time the method is called.

The main steps that happen in applyTransformation are:

  1. Calculate the degrees of rotation for the current transformation.
  2. Get the transformation matrix for the Animation.
  3. Generate a rotation matrix using camera.rotateY(degrees).
  4. Apply that matrix to the Animation's transformation.
  5. Set a pre-translate so that the view rotates around its centre rather than its edge.
  6. Set a post-translate so that the animated view is placed back in the centre of the screen.

Camera in android.graphics

You can see that we also use the android.graphics.Camera class; don't confuse this with the Camera class in android.hardware, which is used to control the camera on an Android device. The Camera class in the android.graphics package is very different: it is used to calculate 3D transformations that can then be applied to animations. This Camera class represents a virtual view, as if we were looking at our Android views through a camera. As we move our virtual camera to the left, we have the effect of the Android view moving to the right, and if we rotate our virtual camera, we have the effect of rotating our Android view. Here the Camera class is used to calculate a rotation about the Y axis; it does this every time the applyTransformation method is called, and so gives each incremental transformation needed to produce a smooth rotation effect. Camera uses transformation matrices to store and calculate transforms. It's not really necessary to have an in-depth knowledge of how transformation matrices work, but a good article on them can be found here. The article is based around Flash, but the same principles apply.

Now that we've got our rotation animation we need to come up with a plan for how to use it. We currently have two images, but just rotating these isn't going to give us the effect we want. So what we need to do is hide the second image and display only the first. We will then rotate this image through 90 degrees until it's edge-on and we can no longer see it. At that point we will make the first image invisible and the second image visible; we will start the animation of the second image at a 90 degree angle and then rotate it round until it is fully visible. To go back from the second image to the first we just reverse the process.

We'll start with our main Activity class Flip3d.java:

package com.example.flip3d;

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.animation.AccelerateInterpolator;
import android.widget.ImageView;

public class Flip3d extends Activity {

    private ImageView image1;
    private ImageView image2;

    private boolean isFirstImage = true;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        image1 = (ImageView) findViewById(R.id.ImageView01);
        image2 = (ImageView) findViewById(R.id.ImageView02);
        image2.setVisibility(View.GONE);

        image1.setOnClickListener(new View.OnClickListener() {
            public void onClick(View view) {
                if (isFirstImage) {
                    applyRotation(0, 90);
                    isFirstImage = !isFirstImage;
                } else {
                    applyRotation(0, -90);
                    isFirstImage = !isFirstImage;
                }
            }
        });
    }

    private void applyRotation(float start, float end) {
        // Find the center of the image
        final float centerX = image1.getWidth() / 2.0f;
        final float centerY = image1.getHeight() / 2.0f;

        // Create a new 3D rotation with the supplied parameters
        // The animation listener is used to trigger the next animation
        final Flip3dAnimation rotation =
                new Flip3dAnimation(start, end, centerX, centerY);
        rotation.setDuration(500);
        rotation.setFillAfter(true);
        rotation.setInterpolator(new AccelerateInterpolator());
        rotation.setAnimationListener(new DisplayNextView(isFirstImage, image1, image2));

        if (isFirstImage) {
            image1.startAnimation(rotation);
        } else {
            image2.startAnimation(rotation);
        }
    }
}
In the main activity class we have the usual onCreate method. In this method we get references to the two images that we are displaying. Here we also set the visibility of our second image to View.GONE. The final thing done in onCreate is to set up a click listener for our first image (we don't need a click listener for the second image). In the click listener we have a boolean, isFirstImage, which tells us which image is currently visible: if it's true the first image is visible, and if false the second image is visible. Depending on which image is visible, we call the applyRotation method with different start and end values, since we rotate the first image in a different direction to the second image.

The applyRotation method is fairly straightforward: we find the centre of our image, create a new instance of our Flip3dAnimation, and apply and start it for the visible image. Well, there's a little more yet... As you may have already noticed, we've only done half the animation here, from 0 to 90 degrees; we still have another 90 degrees to go, and we still have to swap the images over halfway through. So this is where we need an animation listener, which we are going to call DisplayNextView. Create a class of this name and paste in this code:
package com.example.flip3d;

import android.view.animation.Animation;
import android.widget.ImageView;

public final class DisplayNextView implements Animation.AnimationListener {
    private boolean mCurrentView;
    ImageView image1;
    ImageView image2;

    public DisplayNextView(boolean currentView, ImageView image1, ImageView image2) {
        mCurrentView = currentView;
        this.image1 = image1;
        this.image2 = image2;
    }

    public void onAnimationStart(Animation animation) {
    }

    public void onAnimationEnd(Animation animation) {
        image1.post(new SwapViews(mCurrentView, image1, image2));
    }

    public void onAnimationRepeat(Animation animation) {
    }
}
The DisplayNextView animation listener is very simple and just listens for the end of the rotation animation. Since our first rotation only goes from 0 to 90, the onAnimationEnd method will be called when our first image is at 90 degrees. So now we need to swap the images, and this is where our SwapViews class comes in. So create a class called SwapViews and paste in this code:
package com.example.flip3d;

import android.view.View;
import android.view.animation.DecelerateInterpolator;
import android.widget.ImageView;

public final class SwapViews implements Runnable {
    private boolean mIsFirstView;
    ImageView image1;
    ImageView image2;

    public SwapViews(boolean isFirstView, ImageView image1, ImageView image2) {
        mIsFirstView = isFirstView;
        this.image1 = image1;
        this.image2 = image2;
    }

    public void run() {
        final float centerX = image1.getWidth() / 2.0f;
        final float centerY = image1.getHeight() / 2.0f;
        Flip3dAnimation rotation;

        if (mIsFirstView) {
            image1.setVisibility(View.GONE);
            image2.setVisibility(View.VISIBLE);
            image2.requestFocus();

            rotation = new Flip3dAnimation(-90, 0, centerX, centerY);
        } else {
            image2.setVisibility(View.GONE);
            image1.setVisibility(View.VISIBLE);
            image1.requestFocus();

            rotation = new Flip3dAnimation(90, 0, centerX, centerY);
        }

        rotation.setDuration(500);
        rotation.setFillAfter(true);
        rotation.setInterpolator(new DecelerateInterpolator());

        if (mIsFirstView) {
            image2.startAnimation(rotation);
        } else {
            image1.startAnimation(rotation);
        }
    }
}
SwapViews does exactly what its name suggests and swaps the images, setting one image to invisible and the other to visible depending on which was already visible. But it also does one last important thing: it creates and applies the second half of the animation.

You should now have a working flip animation. This technique can also be applied to ViewGroups, so it could be used as a way to transition between different views. There are a few more things we could do to improve this animation, such as adding a depth effect, but I'll leave that to another post. Like I mentioned earlier, this example is based on the samples that come with the downloadable SDK files, so have a look at those as well.

Wednesday, 12 August 2009

More on Android Animations

In a post I wrote a while back, Android transitions- slide in and slide out, I gave an example of defining some translate animations in Java. I mentioned at the time that it was also possible to define these in XML. So here's a little more on that..

First of all let's just take a look at the kind of animations that we are dealing with here. Google has two main types of animations: frame-by-frame animations, which use drawable objects, and tweening animations, which act on views and view groups. The animations that I used in the previous post acted on views and view groups and were of the tweening type, so that's what I'm going to talk about in this post.

Let's take another look at one of the original translate animations that we created:
private Animation inFromRightAnimation() {

    Animation inFromRight = new TranslateAnimation(
            Animation.RELATIVE_TO_PARENT, +1.0f, Animation.RELATIVE_TO_PARENT, 0.0f,
            Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f);
    inFromRight.setDuration(500);
    inFromRight.setInterpolator(new AccelerateInterpolator());
    return inFromRight;
}
Here you can see that we have defined a translate animation. There are four main types of tweening animations: Rotate, Alpha (controls transparency), Scale (for size control) and Translate, which controls movement of views and view groups. It is of course possible to define your own animations, but we'll keep that for another post...

So back to the translate animation. This animation is used to take a view from the right of the visible screen and move it to the left until it appears on the screen. We create a new translation called inFromRight using the TranslateAnimation constructor. Eight arguments are passed into the constructor; they control the X from and to positions and the Y from and to positions. Four of the TranslateAnimation constructor arguments specify how the to and from positions should be interpreted. In our case we specified Animation.RELATIVE_TO_PARENT, which means that our from and to float values will be multiplied by the width or height of the parent of the object being animated. With these arguments we specify that we want our animation to start from an X position of one times the parent's width and end at an X position of zero. We don't specify any values for the Y to and from, since we do not want to move our view in that direction.

Once we have created our animation we then set its duration and the interpolator algorithm to use, i.e. whether we want the animation to happen at a constant speed, accelerate, etc...

Ok, so that's our translate animation in Java; now let's have a look at what it looks like in XML:



<translate xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromXDelta="100%p"
    android:toXDelta="0%p"
    android:duration="500"
    android:interpolator="@android:anim/accelerate_interpolator" />

Here we define our from X position as 100%p; the p stands for relative to parent. We also set the duration and the interpolator. I saved this file in the res/anim folder of my Android project and called it in_from_right.xml. To use it in our original Slide.java activity all we have to do is replace:

flipper.setInAnimation(inFromRightAnimation());
with:
flipper.setInAnimation(AnimationUtils.loadAnimation(context, R.anim.in_from_right));
Of course we also need to define and pass in a context.

You can also define Android Rotate, Alpha and Scale animations in XML; once you understand the basics it's quite straightforward. Have fun with the animations... :)
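
As a quick sketch, a fade-out Alpha animation saved as res/anim/fade_out.xml (the file name is my own) could look like this:

<alpha xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromAlpha="1.0"
    android:toAlpha="0.0"
    android:duration="500"
    android:interpolator="@android:anim/accelerate_interpolator" />

Load it with AnimationUtils.loadAnimation(context, R.anim.fade_out), just as with the translate animation above.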

Tuesday, 11 August 2009