
Wednesday, July 27, 2011


New Tools For Managing Screen Sizes

[This post is by Dianne Hackborn and a supporting cast of thousands; Dianne’s fingerprints can be found all over the Android Application Framework — Tim Bray]

    Android 3.2 includes new tools for supporting devices with a wide range of screen sizes. One important result is better support for a new size of screen: what is typically called a “7-inch” tablet. This release also offers several new APIs to simplify developers’ work in adjusting to different screen sizes.

    This is a long post. We start by discussing the why and how of Android “dp” arithmetic, and the finer points of the screen-size buckets. If you know all that stuff, you can skip down to “Introducing Numeric Selectors” to read about what’s new. We also provide our recommendations for how you can do layout selection in apps targeted at Android 3.2 and higher in a way that should allow you to support the maximum number of device geometries with the minimum amount of effort.

    Of course, the official write-up on Supporting Multiple Screens is also required reading for people working in this space.

    Understanding Screen Densities and the “dp”

    Resolution is the actual number of pixels available in the display, density is how many pixels appear within a constant area of the display, and size is the amount of physical space available for displaying your interface. These are interrelated: increase the resolution and density together, and size stays about the same. This is why the 320x480 screen on a G1 and 480x800 screen on a Droid are both the same screen size: the 480x800 screen has more pixels, but it is also higher density.

    To remove the size/density calculations from the picture, the Android framework works wherever possible in terms of "dp" units, which are corrected for density. In medium-density ("mdpi") screens, which correspond to the original Android phones, physical pixels are identical to dp's; the devices’ dimensions are 320x480 in either scale. A more recent phone might have physical-pixel dimensions of 480x800 but be a high-density device. The conversion factor from hdpi to mdpi in this case is 1.5, so for a developer's purposes, the device is 320x533 in dp's.
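    You can do this arithmetic at runtime with DisplayMetrics, whose density field holds the pixels-per-dp factor; a minimal sketch:

    DisplayMetrics metrics = getResources().getDisplayMetrics();
    float density = metrics.density;                  // 1.0 on mdpi, 1.5 on hdpi
    float widthDp  = metrics.widthPixels  / density;  // e.g. 480 px / 1.5 = 320 dp
    float heightDp = metrics.heightPixels / density;  // e.g. 800 px / 1.5 = ~533 dp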

    Screen-size Buckets

    Android has included support for three screen-size “buckets” since 1.6, based on these “dp” units: “normal” is currently the most popular device format (originally 320x480, more recently higher-density 480x800); “small” is for smaller screens, and “large” is for “substantially larger” screens. Devices that fall in the “large” bucket include the Dell Streak and original 7” Samsung Galaxy Tab. Android 2.3 introduced a new bucket size “xlarge”, in preparation for the approximately-10” tablets (such as the Motorola Xoom) that Android 3.0 was designed to support.

    The definitions are:

    • xlarge screens are at least 960dp x 720dp.

    • large screens are at least 640dp x 480dp.

    • normal screens are at least 470dp x 320dp.

    • small screens are at least 426dp x 320dp. (Android does not currently support screens smaller than this.)

    Here are some more examples of how this works with real screens:

    • A QVGA screen is 320x240 ldpi. Converting to mdpi (a 4/3 scaling factor) gives us 426dp x 320dp; this matches the minimum size above for the small screen bucket.

    • The Xoom is a typical 10” tablet with a 1280x800 mdpi screen. This places it into the xlarge screen bucket.

    • The Dell Streak has an 800x480 mdpi screen. This places it at the bottom of the large size bucket.

    • A typical 7” tablet has a 1024x600 mdpi screen. This also counts as a large screen.

    • The original Samsung Galaxy Tab is an interesting case. Physically it is a 1024x600 7” screen and thus classified as “large”. However, the device configures its screen as hdpi, which means that after applying the appropriate ⅔ scaling factor the actual space on the screen is 682dp x 400dp. This actually moves it out of the “large” bucket and into a “normal” screen size. The Tab nevertheless reports that it is “large”: this was a mistake we made in the framework’s computation of the size for that device. Today no devices should ship like this.

    Issues With Buckets

    Based on developers’ experience so far, we’re not convinced that this limited set of screen-size buckets gives developers everything they need in adapting to the increasing variety of Android-device shapes and sizes. The primary problem is that the borders between the buckets may not always correspond to either devices available to consumers or to the particular needs of apps.

    The “normal” and “xlarge” screen sizes should be fairly straightforward as a target: “normal” screens generally require single panes of information that the user moves between, while “xlarge” screens can comfortably hold multi-pane UIs (even in portrait orientation, with some tightening of the space).

    The “small” screen size is really an artifact of the original Android devices having 320x480 screens. 240x320 screens have a shorter aspect ratio, and applications that don’t take this into account can break on them. These days it is good practice to test user interfaces on a small screen to ensure there are no serious problems.

    The “large” screen size has been challenging for developers — you will notice that it encompasses everything from the Dell Streak to the original Galaxy Tab to 7" tablets in general. Different applications may reasonably want to take different approaches to these devices; it is also quite reasonable to want different behavior for landscape vs. portrait large devices, because landscape has plenty of space for a multi-pane UI while portrait may not.

    Introducing Numeric Selectors

    Android 3.2 introduces a new approach to screen sizes, with the goal of making developers' lives easier. We have defined a set of numbers describing device screen sizes, which you can use to select resources or otherwise adjust your UI. We believe that using these will not only reduce developers’ workloads, but future-proof their apps significantly.

    The numbers describing the screen size are all in “dp” units (remember that your layout dimensions should also be in dp units so that the system can adjust for screen density). They are:

    • width dp: the current width available for application layout in “dp” units; changes when the screen switches orientation between landscape and portrait.

    • height dp: the current height available for application layout in “dp” units; also changes when the screen switches orientation.

    • smallest width dp: the smallest width available for application layout in “dp” units; this is the smallest width dp that you will ever encounter in any rotation of the display.

    Of these, smallest width dp is the most important. It replaces the old screen-size buckets with a continuous range of numbers giving the effective size. This number is based on width because that is fairly universally the driving factor in designing a layout. A UI will often scroll vertically, but have fairly hard constraints on the minimum space it needs horizontally; the available width is also the key factor in determining whether to use a phone-style one-pane layout or tablet-style multi-pane layout.
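    These values are also exposed at runtime on the Configuration object; a minimal sketch (the fields below were added in Android 3.2):

    Configuration config = getResources().getConfiguration();
    int widthDp    = config.screenWidthDp;          // changes with orientation
    int heightDp   = config.screenHeightDp;         // changes with orientation
    int smallestDp = config.smallestScreenWidthDp;  // stable across rotations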

    Typical numbers for screen width dp are:

    • 320: a phone screen (240x320 ldpi, 320x480 mdpi, 480x800 hdpi, etc).

    • 480: a tweener tablet like the Streak (480x800 mdpi).

    • 600: a 7” tablet (600x1024).

    • 720: a 10” tablet (720x1280, 800x1280, etc).

    Using the New Selectors

    When you are designing your UI, the main thing you probably care about is where you switch between a phone-style UI and a tablet-style multi-pane UI. The exact point of this switch will depend on your particular design — maybe you need a full 720dp width for your tablet layout, maybe 600dp is enough, or 480dp, or even some other number between those. Either pick a width and design to it, or, after doing your design, find the smallest width it supports.

    Now you can select your layout resources for phones vs. tablets using the number you want. For example, if 600dp is the smallest width for your tablet UI, you can do this:

    res/layout/main_activity.xml            # For phones
    res/layout-sw600dp/main_activity.xml    # For tablets

    For the rare case where you want to further customize your UI, for example for 7” vs. 10” tablets, you can define additional smallest widths:

    res/layout/main_activity.xml           # For phones
    res/layout-sw600dp/main_activity.xml # For 7” tablets
    res/layout-sw720dp/main_activity.xml # For 10” tablets

    Android will pick the resource that is closest to the device’s “smallest width,” without being larger; so for a hypothetical 700dp x 1200dp tablet, it would pick layout-sw600dp.

    If you want to get fancier, you can make the layout change when the user switches orientation, so that it always best fits the current available width. This can be of particular use for 7” tablets, where a multi-pane layout is a very tight fit in portrait:

    res/layout/main_activity.xml          # Single-pane
    res/layout-w600dp/main_activity.xml   # Multi-pane when enough width

    Or the previous three-layout example could use this to switch to the full UI whenever there is enough width:

    res/layout/main_activity.xml                  # For phones
    res/layout-sw600dp/main_activity.xml          # Tablets
    res/layout-sw600dp-w720dp/main_activity.xml   # Large width

    In the setup above, we will always use the phone layout for devices whose smallest width is less than 600dp; for devices whose smallest width is at least 600dp, we will switch between the tablet and large width layouts depending on the current available width.

    You can also mix in other resource qualifiers:

    res/layout/main_activity.xml               # For phones
    res/layout-sw600dp/main_activity.xml       # Tablets
    res/layout-sw600dp-port/main_activity.xml  # Tablets when portrait

    Selector Precedence

    While it is safest to specify multiple configurations like this to avoid potential ambiguity, you can also take advantage of some subtleties of resource matching. For example, the order that resource qualifiers must be specified in the directory name (documented in Providing Resources) is also the order of their “importance.” Earlier ones are more important than later ones. You can take advantage of this to, for example, easily have a landscape orientation specialization for your default layout:

    res/layout/main_activity.xml           # For phones
    res/layout-land/main_activity.xml      # For phones when landscape
    res/layout-sw600dp/main_activity.xml   # Tablets

    In this case, when running on a tablet that is using landscape orientation, the last layout will be used because the “swNNNdp” qualifier is a better match than “land”.

    Combinations and Versions

    One final thing we need to address is specifying layouts that work on both Android 3.2 and up as well as previous versions of the platform.

    Previous versions of the platform will ignore any resources using the new resource qualifiers. This, then, is one approach that will work:

    res/layout/main_activity.xml           # For phones
    res/layout-xlarge/main_activity.xml    # For pre-3.2 tablets
    res/layout-sw600dp/main_activity.xml   # For 3.2 and up tablets

    This does require, however, that you have two copies of your tablet layout. One way to avoid this is by defining the tablet layout once as a distinct resource, and then making new versions of the original layout resource that point to it. So the layout resources we would have are:

    res/layout/main_activity.xml          # For phones
    res/layout/main_activity_tablet.xml   # For tablets

    To have the original layout point to the tablet version, you put <item> specifications in the appropriate values directories. That is, these two files:

    res/values-xlarge/layout.xml
    res/values-sw600dp/layout.xml

    Both would contain the following XML defining the desired resource:

    <?xml version="1.0" encoding="utf-8"?>
    <resources>
        <item type="layout" name="main_activity">
            @layout/main_activity_tablet
        </item>
    </resources>

    Of course, you can always simply select the resource to use in code. That is, define two or more resources like “layout/main_activity” and “layout/main_activity_tablet,” and select the one to use in your code based on information in the Configuration object or elsewhere. For example:

    public class MyActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            Configuration config = getResources().getConfiguration();
            if (config.smallestScreenWidthDp >= 600) {
                setContentView(R.layout.main_activity_tablet);
            } else {
                setContentView(R.layout.main_activity);
            }
        }
    }

    Conclusion

    We strongly recommend that developers start using the new layout selectors for apps targeted at Android release 3.2 or higher, as we will be doing for Google apps. We think they will make your layout choices easier to express and manage.

    Furthermore, we can see a remarkably wide variety of Android-device form factors coming down the pipe. This is a good thing, and will expand the market for your work. These new layout selectors are specifically designed to make it straightforward for you to make your apps run well in a future hardware ecosystem which is full of variety (and surprises).


    Monday, March 14, 2011


    Android 3.0 Hardware Acceleration

[This post is by Romain Guy, who likes things on your screen to move fast. —Tim Bray]

    One of the biggest changes we made to Android in this release is the addition of a new rendering pipeline that lets applications benefit from hardware accelerated 2D graphics. Hardware accelerated graphics are nothing new to the Android platform; they have always been used for window composition and OpenGL games, for instance. With this new rendering pipeline, however, applications can benefit from an extra boost in performance. On a Motorola Xoom device, all the standard applications like Browser and Calendar use hardware-accelerated 2D graphics.

    In this article, I will show you how to enable the hardware accelerated 2D graphics pipeline in your application and give you a few tips on how to use it properly.

    Go faster

    To enable the hardware accelerated 2D graphics, open your AndroidManifest.xml file and add the following attribute to the <application /> tag:

        android:hardwareAccelerated="true"

    If your application uses only standard widgets and drawables, this should be all you need to do. Once hardware acceleration is enabled, all drawing operations performed on a View's Canvas are performed using the GPU.

    If you have custom drawing code you might need to do a bit more, which is in part why hardware acceleration is not enabled by default. And it's why you might want to read the rest of this article, to understand some of the important details of acceleration.

    Controlling hardware acceleration

    Because of the characteristics of the new rendering pipeline, you might run into issues with your application. Problems usually manifest themselves as invisible elements, exceptions or different-looking pixels. To help you, Android gives you four different ways to control hardware acceleration. You can enable or disable it on the following elements:

    • Application
    • Activity
    • Window
    • View

    To enable or disable hardware acceleration at the application or activity level, use the XML attribute mentioned earlier. The following snippet enables hardware acceleration for the entire application but disables it for one activity:

        <application android:hardwareAccelerated="true">
            <activity ... />
            <activity android:hardwareAccelerated="false" />
        </application>

    If you need more fine-grained control, you can enable hardware acceleration for a given window at runtime:

        getWindow().setFlags(
                WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED,
                WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED);

    Note that you currently cannot disable hardware acceleration at the window level. Finally, hardware acceleration can be disabled on individual views:

        view.setLayerType(View.LAYER_TYPE_SOFTWARE, null);

    Layer types have many other usages that will be described later.

    Am I hardware accelerated?

    It is sometimes useful for an application, or more likely a custom view, to know whether it currently is hardware accelerated. This is particularly useful if your application does a lot of custom drawing and not all operations are properly supported by the new rendering pipeline.

    There are two different ways to check whether the application is hardware accelerated:

    • View.isHardwareAccelerated() returns true if the View is attached to a hardware accelerated window.

    • Canvas.isHardwareAccelerated() returns true if the Canvas is hardware accelerated.

    If you must do this check in your drawing code, it is highly recommended to use Canvas.isHardwareAccelerated() instead of View.isHardwareAccelerated(). Indeed, even when a View is attached to a hardware accelerated window, it can be drawn using a non-hardware accelerated Canvas. This happens for instance when drawing a View into a bitmap for caching purposes.
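    For example, a custom view could branch in its drawing code like this; it is just a sketch, with the two branches standing in for whatever GPU-only or software-safe drawing your view needs:

    @Override
    protected void onDraw(Canvas canvas) {
        if (canvas.isHardwareAccelerated()) {
            // This Canvas is backed by the GPU pipeline
        } else {
            // Software Canvas, e.g. when drawing into a bitmap for caching
        }
    }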

    What drawing operations are supported?

    The current hardware accelerated 2D pipeline supports the most commonly used Canvas operations, and then some. We implemented all the operations needed to render the built-in applications, all the default widgets and layouts, and common advanced visual effects (reflections, tiled textures, etc.) There are however a few operations that are currently not supported, but might be in a future version of Android:

    • Canvas
      • clipPath
      • clipRegion
      • drawPicture
      • drawPoints
      • drawPosText
      • drawTextOnPath
      • drawVertices
    • Paint
      • setLinearText
      • setMaskFilter
      • setRasterizer

    In addition, some operations behave differently when hardware acceleration is enabled:

    • Canvas
      • clipRect: XOR, Difference and ReverseDifference clip modes are ignored; 3D transforms do not apply to the clip rectangle
      • drawBitmapMesh: colors array is ignored
      • drawLines: anti-aliasing is not supported
      • setDrawFilter: can be set, but ignored
    • Paint
      • setDither: ignored
      • setFilterBitmap: filtering is always on
      • setShadowLayer: works with text only
    • ComposeShader
      • A ComposeShader can only contain shaders of different types (a BitmapShader and a LinearGradientShader for instance, but not two instances of BitmapShader)
      • A ComposeShader cannot contain a ComposeShader

    If drawing code in one of your views is affected by any of the missing features or limitations, you don't have to miss out on the advantages of hardware acceleration for your overall application. Instead, consider rendering the problematic view into a bitmap or setting its layer type to LAYER_TYPE_SOFTWARE. In both cases, you will switch back to the software rendering pipeline.

    Dos and don'ts

    Switching to hardware accelerated 2D graphics is a great way to get smoother animations and faster rendering in your application but it is by no means a magic bullet. Your application should be designed and implemented to be GPU friendly. It is easier than you might think if you follow these recommendations:

    • Reduce the number of Views in your application: the more Views the system has to draw, the slower it will be. This applies to the software pipeline as well; it is one of the easiest ways to optimize your UI.
    • Avoid overdraw: always make sure that you are not drawing too many layers on top of each other. In particular, make sure to remove any Views that are completely obscured by other opaque views on top of it. If you need to draw several layers blended on top of each other consider merging them into a single one. A good rule of thumb with current hardware is to not draw more than 2.5 times the number of pixels on screen per frame (and transparent pixels in a bitmap count!)
    • Don't create render objects in draw methods: a common mistake is to create a new Paint, or a new Path, every time a rendering method is invoked. This is not only wasteful, forcing the system to run the GC more often, it also bypasses caches and optimizations in the hardware pipeline (see the sketch after this list).
    • Don't modify shapes too often: complex shapes, paths and circles for instance, are rendered using texture masks. Every time you create or modify a Path, the hardware pipeline must create a new mask, which can be expensive.
    • Don't modify bitmaps too often: every time you change the content of a bitmap, it needs to be uploaded again as a GPU texture the next time you draw it.
    • Use alpha with care: when a View is made translucent using View.setAlpha(), an AlphaAnimation or an ObjectAnimator animating the “alpha” property, it is rendered in an off-screen buffer which doubles the required fill-rate. When applying alpha on very large views, consider setting the View's layer type to LAYER_TYPE_HARDWARE.
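    Here is a minimal sketch of the render-object recommendation above; BadgeView and its drawing are hypothetical, the point is simply that the Paint is allocated once rather than on every onDraw() call:

    public class BadgeView extends View { // hypothetical custom view
        // Created once; creating a Paint inside onDraw() would pressure the GC
        // and bypass caches in the hardware pipeline
        private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public BadgeView(Context context) {
            super(context);
            mPaint.setColor(Color.RED);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            // Reuses the cached Paint on every frame
            canvas.drawCircle(getWidth() / 2f, getHeight() / 2f,
                    getWidth() / 4f, mPaint);
        }
    }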

    View layers

    Since Android 1.0, Views have had the ability to render into off-screen buffers, either by using a View's drawing cache, or by using Canvas.saveLayer(). Off-screen buffers, or layers, have several interesting usages. They can be used to get better performance when animating complex Views or to apply composition effects. For instance, fade effects are implemented by using Canvas.saveLayer() to temporarily render a View into a layer and then compositing it back on screen with an opacity factor.

    Because layers are so useful, Android 3.0 gives you more control over how and when to use them. To do so, we have introduced a new API called View.setLayerType(int type, Paint p). This API takes two parameters: the type of layer you want to use and an optional Paint that describes how the layer should be composited. The paint parameter may be used to apply color filters, special blending modes or opacity to a layer. A View can use one of three layer types:

    • LAYER_TYPE_NONE: the View is rendered normally, and is not backed by an off-screen buffer.
    • LAYER_TYPE_HARDWARE: the View is rendered in hardware into a hardware texture if the application is hardware accelerated. If the application is not hardware accelerated, this layer type behaves the same as LAYER_TYPE_SOFTWARE.
    • LAYER_TYPE_SOFTWARE: the View is rendered in software into a bitmap.

    The type of layer you will use depends on your goal:

    • Performance: use a hardware layer type to render a View into a hardware texture. Once a View is rendered into a layer, its drawing code does not have to be executed until the View calls invalidate(). Some animations, for instance alpha animations, can then be applied directly onto the layer, which is very efficient for the GPU to do.
    • Visual effects: use a hardware or software layer type and a Paint to apply special visual treatments to a View. For instance, you can draw a View in black and white using a ColorMatrixColorFilter (see the sketch after this list).
    • Compatibility: use a software layer type to force a View to be rendered in software. This is an easy way to work around limitations of the hardware rendering pipeline.
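    As a sketch of the visual-effects case just mentioned, here is one way to draw a View in black and white; the Paint's color filter is applied when the layer is composited on screen:

    // Desaturating the color matrix removes all color information
    ColorMatrix matrix = new ColorMatrix();
    matrix.setSaturation(0f);
    Paint paint = new Paint();
    paint.setColorFilter(new ColorMatrixColorFilter(matrix));
    // The View is now composited through the grayscale filter
    view.setLayerType(View.LAYER_TYPE_HARDWARE, paint);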

    Layers and animations

    Hardware-accelerated 2D graphics help deliver a faster and smoother user experience, especially when it comes to animations. Running an animation at 60 frames per second is not always possible when animating complex views that issue a lot of drawing operations. If you are running an animation in your application and do not obtain the smooth results you want, consider enabling hardware layers on your animated views.

    When a View is backed by a hardware layer, some of its properties are handled by the way the layer is composited on screen. Setting these properties will be efficient because they do not require the view to be invalidated and redrawn. Here is the list of properties that will affect the way the layer is composited; calling the setter for any of these properties will result in optimal invalidation and no redraw of the targeted View:

    • alpha: to change the layer's opacity
    • x, y, translationX, translationY: to change the layer's position
    • scaleX, scaleY: to change the layer's size
    • rotation, rotationX, rotationY: to change the layer's orientation in 3D space
    • pivotX, pivotY: to change the layer's transformation origin

    These properties are the names used when animating a View with an ObjectAnimator. If you want to set/get these properties, call the appropriate setter or getter. For instance, to modify the alpha property, call setAlpha(). The following code snippet shows the most efficient way to rotate a View in 3D around the Y axis:

        view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
        ObjectAnimator.ofFloat(view, "rotationY", 180).start();

    Since hardware layers consume video memory, it is highly recommended you enable them only for the duration of the animation. This can be achieved with animation listeners:

        view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
        ObjectAnimator animator = ObjectAnimator.ofFloat(view, "rotationY", 180);
        animator.addListener(new AnimatorListenerAdapter() {
            @Override
            public void onAnimationEnd(Animator animation) {
                view.setLayerType(View.LAYER_TYPE_NONE, null);
            }
        });
        animator.start();

    New drawing model

    Along with hardware-accelerated 2D graphics, Android 3.0 introduces another major change in the UI toolkit’s drawing model: display lists, which are only enabled when hardware acceleration is turned on. To fully understand display lists and how they may affect your application it is important to also understand how Views are drawn.

    Whenever an application needs to update a part of its UI, it invokes invalidate() (or one of its variants) on any View whose content has changed. The invalidation messages are propagated all the way up the view hierarchy to compute the dirty region: the region of the screen that needs to be redrawn. The system then draws any View in the hierarchy that intersects with the dirty region. The drawing model is therefore made of two stages:

    1. Invalidate the hierarchy
    2. Draw the hierarchy

    There are unfortunately two drawbacks to this approach. First, this drawing model requires execution of a lot of code on every draw pass. Imagine for instance your application calls invalidate() on a button and that button sits on top of a more complex View like a MapView. When it comes time to draw, the drawing code of the MapView will be executed even though the MapView itself has not changed.

    The second issue with that approach is that it can hide bugs in your application. Since views are redrawn anytime they intersect with the dirty region, a View whose content you changed might be redrawn even though invalidate() was not called on it. When this happens, you are relying on another View getting invalidated to obtain the proper behavior. Needless to say, this behavior can change every time you modify your application ever so slightly. Remember this rule: always call invalidate() on a View whenever you modify data or state that affects this View’s drawing code. This applies only to custom code since setting standard properties, like the background color or the text in a TextView, will cause invalidate() to be called properly internally.

    Android 3.0 still relies on invalidate() to request screen updates and draw() to render views. The difference is in how the drawing code is handled internally. Rather than executing the drawing commands immediately, the UI toolkit now records them inside display lists. This means that display lists do not contain any logic, but rather the output of the view hierarchy’s drawing code. Another interesting optimization is that the system only needs to record/update display lists for views marked dirty by an invalidate() call; views that have not been invalidated can be redrawn simply by re-issuing the previously recorded display list. The new drawing model now contains 3 stages:

    1. Invalidate the hierarchy
    2. Record/update display lists
    3. Draw the display lists

    With this model, you cannot rely on a View intersecting the dirty region to have its draw() method executed anymore: to ensure that a View’s display list will be recorded, you must call invalidate(). This kind of bug becomes very obvious with hardware acceleration turned on: the View keeps showing its previous content after you change it, which makes the bug easy to spot and fix.
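    In practice, the rule looks like this; the setter and field are hypothetical, shown only to illustrate the invalidate() requirement:

    public void setHighlighted(boolean highlighted) {
        mHighlighted = highlighted; // custom state read by onDraw()
        // Without this call, the stale display list would simply be redrawn
        invalidate();
    }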

    Using display lists also benefits animation performance. In the previous section, we saw that setting specific properties (alpha, rotation, etc.) does not require invalidating the targeted View. This optimization also applies to views with display lists (any View when your application is hardware accelerated.) Let’s imagine you want to change the opacity of a ListView embedded inside a LinearLayout, above a Button. Here is what the (simplified) display list of the LinearLayout looks like before changing the list’s opacity:

        DrawDisplayList(ListView)
        DrawDisplayList(Button)

    After invoking listView.setAlpha(0.5f) the display list now contains this:

        SaveLayerAlpha(0.5)
        DrawDisplayList(ListView)
        Restore
        DrawDisplayList(Button)

    The complex drawing code of ListView was not executed. Instead the system only updated the display list of the much simpler LinearLayout. In previous versions of Android, or in an application without hardware acceleration enabled, the drawing code of both the list and its parent would have to be executed again.

    It’s your turn

    Enabling hardware accelerated 2D graphics in your application is easy, particularly if you rely solely on standard views and drawables. Just keep in mind the few limitations and potential issues described in this document and make sure to thoroughly test your application!


    Wednesday, November 11, 2009


    Integrating Application with Intents

Written in collaboration with Michael Burton, Mob.ly; Ivan Mitrovic, uLocate; and Josh Garnier, OpenTable.

    OpenTable, uLocate, and Mob.ly worked together to create a great user experience on Android. We saw an opportunity to enable WHERE and GoodFood users to make reservations on OpenTable easily and seamlessly. This is a situation where everyone wins — OpenTable gets more traffic, WHERE and GoodFood gain functionality to make their applications stickier, and users benefit because they can make reservations with only a few taps of a finger. We were able to achieve this deep integration between our applications by using Android's Intent mechanism. Intents are perhaps one of Android's coolest, most unique, and under-appreciated features. Here's how we exploited them to compose a new user experience from parts each of us have.

    Designing

    One of the first steps is to design your Intent interface, or API. The main public Intent that OpenTable exposes is the RESERVE Intent, which lets you make a reservation at a specific restaurant and optionally specify the date, time, and party size.

    Here's an example of how to make a reservation using the RESERVE Intent:

    startActivity(new Intent("com.opentable.action.RESERVE",
            Uri.parse("reserve://opentable.com/2947?partySize=3")));

    Our objective was to make it simple and clear to the developer using the Intent. So how did we decide what it would look like?

    First, we needed an Action. We considered using Intent.ACTION_VIEW, but decided this didn't map well to making a reservation, so we made up a new action. Following the conventions of the Android platform (roughly <package-name>.action.<action-name>), we chose "com.opentable.action.RESERVE". Actions really are just strings, so it's important to namespace them. Not all applications will need to define their own actions. In fact, common actions such as Intent.ACTION_VIEW (aka "android.intent.action.VIEW") are often a better choice if you're not doing something unusual.

    Next we needed to determine how data would be sent in our Intent. We decided to have the data encoded in a URI, although you might choose to receive your data as a collection of items in the Intent's data Bundle. We used a scheme of "reserve:" to be consistent with our action. We then put our domain authority and the restaurant ID into the URI path since it was required, and we shunted off all of the other, optional inputs to URI query parameters.
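    For instance, a caller could assemble the same URI with Uri.Builder instead of string concatenation (a sketch reusing the example restaurant ID and party size from above):

    Uri reserveUri = new Uri.Builder()
            .scheme("reserve")
            .authority("opentable.com")
            .appendPath("2947")                     // required restaurant ID
            .appendQueryParameter("partySize", "3") // optional input
            .build();
    startActivity(new Intent("com.opentable.action.RESERVE", reserveUri));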

    Exposing

    Once we knew what we wanted the Intent to look like, we needed to register the Intent with the system so Android would know to start up the OpenTable application. This is done by inserting an Intent filter into the appropriate Activity declaration in AndroidManifest.xml:

    <activity android:name=".activity.Splash" ... >
        ...
        <intent-filter>
            <action android:name="com.opentable.action.RESERVE" />
            <category android:name="android.intent.category.DEFAULT" />
            <data android:scheme="reserve" android:host="opentable.com" />
        </intent-filter>
        ...
    </activity>

    In our case, we wanted users to see a brief OpenTable splash screen as we loaded up details about their restaurant selection, so we put the Intent Filter in the splash Activity definition. We set our category to be DEFAULT. This will ensure our application is launched without asking the user what application to use, as long as no other Activities also list themselves as default for this action.

    Notice that things like the URI query parameter ("partySize" in our example) are not specified by the Intent filter. This is why documentation is key when defining your Intents, which we'll talk about a bit later.

    Processing

    Now the only thing left to do was write the code to handle the intent.

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        final Uri uri;
        final int restaurantId;
        try {
            uri = getIntent().getData();
            restaurantId = Integer.parseInt(uri.getPathSegments().get(0));
        } catch (Exception e) {
            // Restaurant ID is required; TAG is the activity's log tag
            Log.e(TAG, "Could not parse restaurant ID from the intent", e);
            startActivity(FindTable.start(FindTablePublic.this));
            finish();
            return;
        }
        final String partySize = uri.getQueryParameter("partySize");
        ...
    }

    Although this is not quite all the code, you get the idea. The hardest part here was the error handling. OpenTable wanted to be able to gracefully handle erroneous Intents that might be sent by partner applications, so if we have any problem parsing the restaurant ID, we pass the user off to another Activity where they can find the restaurant manually. It's important to verify the input just as you would in a desktop or web application to protect against injection attacks that might harm your app or your users.

    Calling and Handling Uncertainty with Grace

    Actually invoking the target application from within the requester is quite straightforward, but there are a few cases we need to handle. What if OpenTable isn't installed? What if WHERE or GoodFood doesn't know the restaurant ID?



                                  Restaurant ID known     Restaurant ID unknown
    User has OpenTable            Call OpenTable Intent   Don't show reserve button
    User doesn't have OpenTable   Call Market Intent      Don't show reserve button

    You'll probably wish to work with your partner to decide exactly what to do if the user doesn't have the target application installed. In this case, we decided we would take the user to Android Market to download OpenTable if s/he wished to do so.

    public void showReserveButton() {

        // Set up the Intent to call OpenTable
        Uri reserveUri = Uri.parse(String.format(
                "reserve://opentable.com/%s?refId=5449", opentableId));
        final Intent opentableIntent = new Intent("com.opentable.action.RESERVE", reserveUri);

        // Set up the Intent to deep link into Android Market
        Uri marketUri = Uri.parse("market://search?q=pname:com.opentable");
        final Intent marketIntent = new Intent(Intent.ACTION_VIEW).setData(marketUri);

        opentableButton.setVisibility(opentableId > 0 ? View.VISIBLE : View.GONE);
        opentableButton.setOnClickListener(new Button.OnClickListener() {
            public void onClick(View v) {
                PackageManager pm = getPackageManager();
                // Fall back to Android Market when OpenTable is not installed
                startActivity(pm.queryIntentActivities(opentableIntent, 0).size() == 0 ?
                        marketIntent : opentableIntent);
            }
        });
    }

    In the case where the ID for the restaurant is unavailable, whether because they don't take reservations or they aren't part of the OpenTable network, we simply hide the reserve button.



    Publishing the Intent Specification

    Now that all the technical work is done, how can you get other developers to use your Intent-based API besides 1:1 outreach? The answer is simple: publish documentation on your website. This makes it more likely that other applications will link to your functionality and also makes your application available to a wider community than you might otherwise reach.

    If there's an application that you'd like to tap into that doesn't have any published information, try contacting the developer. It's often in their best interest to encourage third parties to use their APIs, and if they already have an API sitting around, it might be simple to get you the documentation for it.

    Summary

    It's really just this simple. Now when any of us is in a new city or just around the neighborhood, it's easy to check which place is the new hot spot and immediately grab an available table. It's great to not need to find a restaurant in one application, launch OpenTable to see if there's a table, find out there isn't, launch the first application again, and on and on. We hope you'll find this write-up useful as you develop your own public intents and that you'll consider sharing them with the greater Android community.


    Friday, October 23, 2009


    UI framework changes in Android 1.6

Android 1.6 introduces numerous enhancements and bug fixes in the UI framework. Today, I'd like to highlight three improvements in particular.

    Optimized drawing

    The UI toolkit introduced in Android 1.6 is aware of which views are opaque and can use this information to avoid drawing views that the user will not be able to see. Before Android 1.6, the UI toolkit would sometimes perform unnecessary operations by drawing a window background when it was obscured by a full-screen opaque view. A workaround was available to avoid this, but the technique was limited and required work on your part. With Android 1.6, the UI toolkit determines whether a view is opaque by simply querying the opacity of the background drawable. If you know that your view is going to be opaque but that information does not depend on the background drawable, you can simply override the method called isOpaque():

    @Override
    public boolean isOpaque() {
        return true;
    }

    The value returned by isOpaque() does not have to be constant and can change at any time. For instance, the implementation of ListView in Android 1.6 indicates that a list is opaque only when the user is scrolling it.

    Updated: Our apologies—we spoke too soon about isOpaque(). It will be available in a future update to the Android platform.

    More flexible, more robust RelativeLayout

    RelativeLayout is the most versatile layout offered by the Android UI toolkit and can be successfully used to reduce the number of views created by your applications. This layout used to suffer from various bugs and limitations, sometimes making it difficult to use without having some knowledge of its implementation. To make your life easier, Android 1.6 comes with a revamped RelativeLayout. This new implementation not only fixes all known bugs in RelativeLayout (let us know when you find new ones) but also addresses its major limitation: the fact that views had to be declared in a particular order. Consider the following XML layout:

    <?xml version="1.0" encoding="utf-8"?>
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="64dip"
        android:padding="6dip">

        <TextView
            android:id="@+id/band"
            android:layout_width="fill_parent"
            android:layout_height="26dip"
            android:layout_below="@+id/track"
            android:layout_alignLeft="@id/track"
            android:layout_alignParentBottom="true"
            android:gravity="top"
            android:text="The Airborne Toxic Event" />

        <TextView
            android:id="@id/track"
            android:layout_marginLeft="6dip"
            android:layout_width="fill_parent"
            android:layout_height="26dip"
            android:layout_toRightOf="@+id/artwork"
            android:textAppearance="?android:attr/textAppearanceMedium"
            android:gravity="bottom"
            android:text="Sometime Around Midnight" />

        <ImageView
            android:id="@id/artwork"
            android:layout_width="56dip"
            android:layout_height="56dip"
            android:layout_gravity="center_vertical"
            android:src="@drawable/artwork" />

    </RelativeLayout>

    This code builds a very simple layout—an image on the left with two lines of text stacked vertically. This XML layout is perfectly fine and contains no errors. Unfortunately, Android 1.5's RelativeLayout is incapable of rendering it correctly.

    The problem is that this layout uses forward references. For instance, the "band" TextView is positioned below the "track" TextView, but "track" is declared after "band", and in Android 1.5 RelativeLayout does not know how to handle this case. Android 1.6, however, handles forward references correctly: the exact same layout renders on screen exactly as you would expect when writing it.

    Easier click listeners

    Setting up a click listener on a button is a very common task, but it requires quite a bit of boilerplate code:

    findViewById(R.id.myButton).setOnClickListener(new View.OnClickListener() {
        public void onClick(View v) {
            // Do stuff
        }
    });

    One way to reduce the amount of boilerplate is to share a single click listener between several buttons. While this technique reduces the number of classes, it still requires a fair amount of code and it still requires giving each button an id in your XML layout file:

    View.OnClickListener handler = new View.OnClickListener() {
        public void onClick(View v) {
            switch (v.getId()) {
                case R.id.myButton:
                    // Do stuff
                    break;
                case R.id.myOtherButton:
                    // Do other stuff
                    break;
            }
        }
    };

    findViewById(R.id.myButton).setOnClickListener(handler);
    findViewById(R.id.myOtherButton).setOnClickListener(handler);

    With Android 1.6, none of this is necessary. All you have to do is declare a public method in your Activity to handle the click (the method must have one View argument):

    class MyActivity extends Activity {
        public void myClickHandler(View target) {
            // Do stuff
        }
    }

    And then reference this method from your XML layout:

    <Button android:onClick="myClickHandler" />

    This new feature reduces both the amount of Java and XML you have to write, leaving you more time to concentrate on your application.

    The Android team is committed to helping you write applications in the easiest and most efficient way possible. We hope you find these improvements useful and we're excited to see your applications on Android Market.


    Thursday, October 8, 2009


    Support for additional screen resolutions and densities in Android

You may have heard that one of the key changes introduced in Android 1.6 is support for new screen sizes. This is one of the things that has me very excited about Android 1.6 since it means Android will start becoming available on so many more devices. However, as a developer, I know this also means a bit of additional work. That's why we've spent quite a bit of time making it as easy as possible for you to update your apps to work on these new screen sizes.

    To date, all Android devices (such as the T-Mobile G1 and Samsung I7500, among others) have had HVGA (320x480) screens. The essential change in Android 1.6 is that we've expanded support to include three different classes of screen sizes:

    • small: devices with a screen size smaller than the T-Mobile G1 or Samsung I7500, for example the recently announced HTC Tattoo
    • normal: devices with a screen size roughly the same as the G1 or I7500.
    • large: devices with a screen size larger than the G1 or I7500 (such as a tablet-style device).

    Any given device will fall into one of those three groups. As a developer, you can control if and how your app appears to devices in each group by using a few tools we've introduced in the Android framework APIs and SDK. The documentation at the developer site describes each of these tools in detail, but here they are in a nutshell:

    • new attributes in AndroidManifest for an application to specify what kind of screens it supports (see the sketch after this list),
    • framework-level support for using image drawables/layouts correctly regardless of screen size,
    • a compatibility mode for existing applications, providing a pseudo-HVGA environment, and descriptions of compatible device resolutions and minimum diagonal sizes.
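    As a sketch of the first item in the list above, an application that has been tested on all three screen classes could declare something like the following in its manifest; the exact values depend on what your app actually supports:

    <supports-screens android:smallScreens="true"
                      android:normalScreens="true"
                      android:largeScreens="true"
                      android:anyDensity="true" />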

    The documentation also provides a quick checklist and testing tips for developers to ensure their apps will run correctly on devices of any screen size.

    Once you've upgraded your app using Android 1.6 SDK, you'll need to make sure your app is only available to users whose phones can properly run it. To help you with that, we've also added some new tools to Android Market.

    Until the next time you upload a new version of your app to Android Market, we will assume that it works for normal-class screen sizes. This means users with normal-class and large-class screens will have access to these apps. Devices with "large" screens simply run these apps in a compatibility mode, which simulates an HVGA environment on the larger screen.

    Devices with small-class screens, however, will only be shown apps which explicitly declare (via the AndroidManifest) that they will run properly on small screens. In our studies, we found that "squeezing" an app designed for a larger screen onto a smaller screen often produces a bad result. To prevent users with small screens from getting a bad impression of your app (and reviewing it negatively!), Android Market makes sure that they can't see it until you upload a new version that declares itself compatible.

    We expect small-class screens, as well as devices with the additional resolutions listed in Table 1 of the developer document, to hit the market in time for the holiday season. Note that not all devices will be upgraded to Android 1.6 at the same time; there will be a significant number of users still on Android 1.5 devices. To use the same apk to target both Android 1.5 and Android 1.6 devices, build your apps using the Android 1.5 SDK and test them on both Android 1.5 and 1.6 system images to make sure they continue to work well on both types of devices.

    If you want to target small-class devices like the HTC Tattoo, build your app using the Android 1.6 SDK. Note that if your application requires Android 1.6 features but does not support a screen class, you need to set the appropriate attributes to false. To use optimized assets for normal-class, high-density devices like WVGA, or for low-density devices, also use the Android 1.6 SDK.


    Monday, October 5, 2009


    Gestures on Android 1.6

Touch screens are a great way to interact with applications on mobile devices. With a touch screen, users can easily tap, drag, fling, or slide to quickly perform actions in their favorite applications. But it's not always that easy for developers. With Android, it's easy to recognize simple actions, like a swipe, but handling more complicated gestures is much more difficult and requires developers to write a lot of code. That's why we have decided to introduce a new gestures API in Android 1.6. This API, located in the new package android.gesture, lets you store, load, draw and recognize gestures. In this post I will show you how you can use the android.gesture API in your applications. Before going any further, you should download the source code of the examples.

    Creating a gestures library

    The Android 1.6 SDK comes with a new application pre-installed on the emulator, called Gestures Builder. You can use this application to create a set of pre-defined gestures for your own application. It also serves as an example of how to let the user define his own gestures in your applications. You can find the source code of Gestures Builder in the samples directory of Android 1.6. In our example we will use Gestures Builder to generate a set of gestures for us (make sure to create an AVD with an SD card image to use Gestures Builder). The screenshot below shows what the application looks like after adding a few gestures.

    As you can see, a gesture is always associated with a name. That name is very important because it identifies each gesture within your application. The names do not have to be unique; in fact, it can be very useful to have several gestures with the same name to increase the precision of the recognition. Every time you add or edit a gesture in Gestures Builder, a file is generated on the emulator's SD card, /sdcard/gestures. This file contains the description of all the gestures, and you will need to package it in your application's resources, in /res/raw.

    Loading the gestures library

    Now that you have a set of pre-defined gestures, you must load it inside your application. This can be achieved in several ways but the easiest is to use the GestureLibraries class:

    mLibrary = GestureLibraries.fromRawResource(this, R.raw.spells);
    if (!mLibrary.load()) {
        finish();
    }

    In this example, the gesture library is loaded from the file /res/raw/spells. You can easily load libraries from other sources, like the SD card, which is very important if you want your application to be able to save the library; a library loaded from a raw resource is read-only and cannot be modified.
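    For instance, a modifiable library could be loaded from the SD card instead (a minimal sketch; the path matches the file Gestures Builder writes):

    GestureLibrary library = GestureLibraries.fromFile("/sdcard/gestures");
    if (library.load()) {
        // The library can now be modified and persisted with library.save()
    }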

    Recognizing gestures

    To start recognizing gestures in your application, all you have to do is add a GestureOverlayView to your XML layout:

    <android.gesture.GestureOverlayView
        android:id="@+id/gestures"
        android:layout_width="fill_parent"
        android:layout_height="0dip"
        android:layout_weight="1.0" />

    Notice that the GestureOverlayView is not part of the usual android.widget package. Therefore, you must use its fully qualified name. A gesture overlay acts as a simple drawing board on which the user can draw his gestures. You can tweak several visual properties, like the color and the width of the stroke used to draw gestures, and register various listeners to follow what the user is doing. The most commonly used listener is GestureOverlayView.OnGesturePerformedListener which fires whenever a user is done drawing a gesture:

    GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
    gestures.addOnGesturePerformedListener(this);

    When the listener fires, you can ask the GestureLibrary to try to recognize the gesture. In return, you will get a list of Prediction instances, each with a name - the same name you entered in the Gestures Builder - and a score. The list is sorted by descending scores; the higher the score, the more likely the associated gesture is the one the user intended to draw. The following code snippet demonstrates how to retrieve the name of the first prediction:

    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
        ArrayList<Prediction> predictions = mLibrary.recognize(gesture);

        // We want at least one prediction
        if (predictions.size() > 0) {
            Prediction prediction = predictions.get(0);
            // We want at least some confidence in the result
            if (prediction.score > 1.0) {
                // Show the spell
                Toast.makeText(this, prediction.name, Toast.LENGTH_SHORT).show();
            }
        }
    }

    In this example, the first prediction is taken into account only if its score is greater than 1.0. The threshold you use is entirely up to you, but know that scores lower than 1.0 are typically poor matches. This is all the code you need to create a simple application that can recognize pre-defined gestures (see the source code of the project GesturesDemo).

    Gestures overlay

    In the example above, the GestureOverlayView was used as a normal view, embedded inside a LinearLayout. However, as its name suggests, it can also be used as an overlay on top of other views. This can be useful to recognize gestures in a game or just anywhere in the UI of an application. In the second example, called GesturesListDemo, we'll create an overlay on top of a list of contacts. We start again in Gestures Builder to create a new set of pre-defined gestures.

    And here is what the XML layout looks like:

    <android.gesture.GestureOverlayView
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:id="@+id/gestures"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:gestureStrokeType="multiple"
        android:eventsInterceptionEnabled="true"
        android:orientation="vertical">

        <ListView
            android:id="@android:id/list"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" />

    </android.gesture.GestureOverlayView>

    In this application, the gestures view is an overlay on top of a regular ListView. The overlay also specifies a few properties that we did not need before:

    • gestureStrokeType: indicates whether we want to recognize gestures made of a single stroke or multiple strokes. Since one of our gestures is the "+" symbol, we need multiple strokes
    • eventsInterceptionEnabled: when set to true, this property tells the overlay to steal the events from its children as soon as it knows the user is really drawing a gesture. This is useful when there's a scrollable view under the overlay, to avoid scrolling the underlying child as the user draws his gesture
    • orientation: indicates the scroll orientation of the views underneath. In this case the list scrolls vertically, which means that any horizontal gestures (like action_delete) can immediately be recognized as a gesture. Gestures that start with a vertical stroke must contain at least one horizontal component to be recognized. In other words, a simple vertical line cannot be recognized as a gesture since it would conflict with the list's scrolling.

    The code used to load and set up the gestures library and overlay is exactly the same as before. The only difference is that we now check the name of the predictions to know what the user intended to do:

    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
        ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
        if (predictions.size() > 0 && predictions.get(0).score > 1.0) {
            String action = predictions.get(0).name;
            if ("action_add".equals(action)) {
                Toast.makeText(this, "Adding a contact", Toast.LENGTH_SHORT).show();
            } else if ("action_delete".equals(action)) {
                Toast.makeText(this, "Removing a contact", Toast.LENGTH_SHORT).show();
            } else if ("action_refresh".equals(action)) {
                Toast.makeText(this, "Reloading contacts", Toast.LENGTH_SHORT).show();
            }
        }
    }

    The user is now able to draw his gestures on top of the list without interfering with the scrolling.

    The overlay even gives visual clues as to whether the gesture is considered valid for recognition. In the case of a vertical overlay, for instance, a single vertical stroke cannot be recognized as a gesture and is therefore drawn with a translucent color.

    It's your turn

    Adding support for gestures in your application is easy and can be a valuable addition. The gestures API does not even have to be used to recognize complex shapes; it will work equally well to recognize simple swipes. We are very excited by the possibilities the gestures API offers, and we're eager to see what cool applications the community will create with it.


    Monday, September 28, 2009


    Zipalign: an easy optimization

  • Monday, September 28, 2009
  • Ric RAT
  • The Android 1.6 SDK includes a tool called zipalign that optimizes the way an application is packaged. Doing this enables Android to interact with your application more efficiently and thus has the potential to make your application and the overall system run faster. We strongly encourage you to use zipalign on both new and already published applications and to make the optimized version available—even if your application targets a previous version of Android. We'll get into more detail on what zipalign does, how to use it, and why you'll want to do so in the rest of this post.

    In Android, data files stored in each application's apk are accessed by multiple processes: the installer reads the manifest to handle the permissions associated with that application; the Home application reads resources to get the application's name and icon; the system server reads resources for a variety of reasons (e.g. to display that application's notifications); and last but not least, the resource files are obviously used by the application itself.

    The resource-handling code in Android can efficiently access resources when they're aligned on 4-byte boundaries by memory-mapping them. But for resources that are not aligned (i.e. when zipalign hasn't been run on an apk), it has to fall back to explicitly reading them—which is slower and consumes additional memory.

    For an application developer like you, this fallback mechanism is very convenient. It provides a lot of flexibility by allowing for several different development methods, including those that don't include aligning resources as part of their normal flow.

    Unfortunately, the situation is reversed for users—reading resources from unaligned apks is slow and takes a lot of memory. In the best case, the only visible result is that both the Home application and the unaligned application launch slower than they otherwise should. In the worst case, installing several applications with unaligned resources increases memory pressure, thus causing the system to thrash around by having to constantly start and kill processes. The user ends up with a slow device with a poor battery life.

    Luckily, it's very easy to align the resources:

    • Using ADT:
      • ADT (starting with 0.9.3) will automatically align release application packages if the export wizard is used to create them. To use the wizard, right click the project and choose "Android Tools" > "Export Signed Application Package..." It can also be accessed from the first page of the AndroidManifest.xml editor.
    • Using Ant:
      • The Ant build script that targets Android 1.6 (API level 4) can align application packages. Targets for older versions of the Android platform are not aligned by the Ant build script and need to be manually aligned.
      • Debug packages built with Ant for Android 1.6 applications are aligned and signed by default.
• Release packages are aligned automatically only if Ant has enough information to sign them, since aligning has to happen after signing. To be able to sign packages, and therefore to align them, Ant needs to know the location of the keystore and the name of the key in build.properties; the names of the properties are key.store and key.alias respectively (see the illustrative snippet after this list). If those properties are present, the signing tool will prompt you to enter the store/key passwords during the build, and the script will sign and then align the apk file. If the properties are missing, the release package will not be signed, and therefore will not be aligned either.
    • Manually:
• To align a package manually, use the zipalign tool found in the tools folder of the Android 1.6 SDK. It can be used on application packages targeting any version of Android, and should be run after signing the apk file, using the following command:
        zipalign -v 4 source.apk destination.apk
    • Verifying alignment:
      • The following command verifies that a package is aligned:
        zipalign -c -v 4 application.apk
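For illustration, here is what the two Ant properties mentioned above might look like in build.properties; the keystore path and key alias below are placeholders, not values from the original post:

key.store=/path/to/release.keystore
key.alias=mykey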

We encourage you to manually run zipalign on your currently published applications and to make the newly aligned versions available to users. And don't forget to align any new applications going forward!


    Wednesday, September 23, 2009


    An introduction to Text-To-Speech in Android

  • Wednesday, September 23, 2009
  • Ric RAT
  • We've introduced a new feature in version 1.6 of the Android platform: Text-To-Speech (TTS). Also known as "speech synthesis", TTS enables your Android device to "speak" text of different languages.

    Before we explain how to use the TTS API itself, let's first review a few aspects of the engine that will be important to your TTS-enabled application. We will then show how to make your Android application talk and how to configure the way it speaks.

    Languages and resources

    About the TTS resources

    The TTS engine that ships with the Android platform supports a number of languages: English, French, German, Italian and Spanish. Also, depending on which side of the Atlantic you are on, American and British accents for English are both supported.

    The TTS engine needs to know which language to speak, as a word like "Paris", for example, is pronounced differently in French and English. So the voice and dictionary are language-specific resources that need to be loaded before the engine can start to speak.

Although all Android-powered devices that support the TTS functionality ship with the engine, some devices have limited storage and may lack the language-specific resource files. If a user wants to install those resources, the TTS API enables an application to query the platform for the availability of language files and to initiate their download and installation. So upon creating your activity, a good first step is to check for the presence of the TTS resources with the corresponding intent:

    Intent checkIntent = new Intent();
    checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
    startActivityForResult(checkIntent, MY_DATA_CHECK_CODE);

A successful check will be marked by a CHECK_VOICE_DATA_PASS result code, indicating that this device is ready to speak once our android.speech.tts.TextToSpeech object is created. If the check fails, we need to let the user install the data that's required for the device to become a multi-lingual talking machine! Downloading and installing the data is accomplished by firing off the ACTION_INSTALL_TTS_DATA intent, which will take the user to Android Market and let them initiate the download. Installation of the data will happen automatically once the download completes. Here is an example of what your implementation of onActivityResult() would look like:

private TextToSpeech mTts;

protected void onActivityResult(
        int requestCode, int resultCode, Intent data) {
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            // success, create the TTS instance
            mTts = new TextToSpeech(this, this);
        } else {
            // missing data, install it
            Intent installIntent = new Intent();
            installIntent.setAction(
                    TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
        }
    }
}

    In the constructor of the TextToSpeech instance we pass a reference to the Context to be used (here the current Activity), and to an OnInitListener (here our Activity as well). This listener enables our application to be notified when the Text-To-Speech engine is fully loaded, so we can start configuring it and using it.

    Languages and Locale

At Google I/O, we showed an example of TTS where it was used to speak the result of a translation from and to one of the 5 languages the Android TTS engine currently supports. Loading a language is as simple as calling, for instance:

    mTts.setLanguage(Locale.US);

    to load and set the language to English, as spoken in the country "US". A locale is the preferred way to specify a language because it accounts for the fact that the same language can vary from one country to another. To query whether a specific Locale is supported, you can use isLanguageAvailable(), which returns the level of support for the given Locale. For instance the calls:

mTts.isLanguageAvailable(Locale.UK)
mTts.isLanguageAvailable(Locale.FRANCE)
mTts.isLanguageAvailable(new Locale("spa", "ESP"))

    will return TextToSpeech.LANG_COUNTRY_AVAILABLE to indicate that the language AND country as described by the Locale parameter are supported (and the data is correctly installed). But the calls:

mTts.isLanguageAvailable(Locale.CANADA_FRENCH)
mTts.isLanguageAvailable(new Locale("spa"))

    will return TextToSpeech.LANG_AVAILABLE. In the first example, French is supported, but not the given country. And in the second, only the language was specified for the Locale, so that's what the match was made on.

Also note that, besides the ACTION_CHECK_TTS_DATA intent, you can check the availability of the TTS data with isLanguageAvailable() once you have created your TextToSpeech instance; it will return TextToSpeech.LANG_MISSING_DATA if the required resources are not installed for the queried language.

Making the engine speak an Italian string while it is set to French will produce some pretty interesting results, but not exactly something your user would understand. So try to match the language of your application's content and the language that you loaded in your TextToSpeech instance. Also, if you are using Locale.getDefault() to query the current Locale, make sure that at least the default language is supported.
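As a minimal sketch of that last point, assuming mTts has been created as shown earlier, you might fall back to a language you know is supported when the user's default Locale is not; the choice of Locale.US as the fallback is an assumption made for this example:

// Fall back to US English if the default locale is unusable (assumed fallback).
Locale defaultLocale = Locale.getDefault();
int availability = mTts.isLanguageAvailable(defaultLocale);
if (availability == TextToSpeech.LANG_MISSING_DATA
        || availability == TextToSpeech.LANG_NOT_SUPPORTED) {
    mTts.setLanguage(Locale.US);
} else {
    mTts.setLanguage(defaultLocale);
}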

    Making your application speak

    Now that our TextToSpeech instance is properly initialized and configured, we can start to make your application speak. The simplest way to do so is to use the speak() method. Let's iterate on the following example to make a talking alarm clock:

    String myText1 = "Did you sleep well?";
    String myText2 = "I hope so, because it's time to wake up.";
    mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, null);
    mTts.speak(myText2, TextToSpeech.QUEUE_ADD, null);

    The TTS engine manages a global queue of all the entries to synthesize, which are also known as "utterances". Each TextToSpeech instance can manage its own queue in order to control which utterance will interrupt the current one and which one is simply queued. Here the first speak() request would interrupt whatever was currently being synthesized: the queue is flushed and the new utterance is queued, which places it at the head of the queue. The second utterance is queued and will be played after myText1 has completed.

    Using optional parameters to change the playback stream type

    On Android, each audio stream that is played is associated with one stream type, as defined in android.media.AudioManager. For a talking alarm clock, we would like our text to be played on the AudioManager.STREAM_ALARM stream type so that it respects the alarm settings the user has chosen on the device. The last parameter of the speak() method allows you to pass to the TTS engine optional parameters, specified as key/value pairs in a HashMap. Let's use that mechanism to change the stream type of our utterances:

HashMap<String, String> myHashAlarm = new HashMap<String, String>();
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
        String.valueOf(AudioManager.STREAM_ALARM));
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, myHashAlarm);
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, myHashAlarm);

    Using optional parameters for playback completion callbacks

Note that speak() calls are asynchronous, so they will return well before the text is done being synthesized and played by Android, regardless of the use of QUEUE_FLUSH or QUEUE_ADD. But you might need to know when a particular utterance is done playing. For instance, you might want to start playing annoying music after myText2 has finished synthesizing (remember, we're trying to wake up the user). We will again use an optional parameter, this time to tag our utterance as one we want to identify. We also need to make sure our activity implements the TextToSpeech.OnUtteranceCompletedListener interface:

mTts.setOnUtteranceCompletedListener(this);
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
        String.valueOf(AudioManager.STREAM_ALARM));
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, myHashAlarm);
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID,
        "end of wakeup message ID");
// myHashAlarm now contains two optional parameters
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, myHashAlarm);

    And the Activity gets notified of the completion in the implementation of the listener:

public void onUtteranceCompleted(String uttId) {
    // compare string contents with equals(), not with ==
    if ("end of wakeup message ID".equals(uttId)) {
        playAnnoyingMusic();
    }
}
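One caveat, as a hedged note: the engine does not guarantee that this callback runs on the UI thread, so if your completion handler needs to touch any views, post the work back to the UI thread first. A sketch, assuming we are inside an Activity:

public void onUtteranceCompleted(String uttId) {
    if ("end of wakeup message ID".equals(uttId)) {
        // hop back onto the UI thread before touching any UI state
        runOnUiThread(new Runnable() {
            public void run() {
                playAnnoyingMusic();
            }
        });
    }
}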

    File rendering and playback

    While the speak() method is used to make Android speak the text right away, there are cases where you would want the result of the synthesis to be recorded in an audio file instead. This would be the case if, for instance, there is text your application will speak often; you could avoid the synthesis CPU-overhead by rendering only once to a file, and then playing back that audio file whenever needed. Just like for speak(), you can use an optional utterance identifier to be notified on the completion of the synthesis to the file:

HashMap<String, String> myHashRender = new HashMap<String, String>();
String wakeUpText = "Are you up yet?";
String destFileName = "/sdcard/myAppCache/wakeUp.wav";
myHashRender.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, wakeUpText);
mTts.synthesizeToFile(wakeUpText, myHashRender, destFileName);

    Once you are notified of the synthesis completion, you can play the output file just like any other audio resource with android.media.MediaPlayer.
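For instance, a minimal sketch of that playback, reusing destFileName from the snippet above (the log tag is arbitrary):

MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource(destFileName);
    player.prepare();
    player.start();
} catch (IOException e) {
    // the rendered file may be missing or unreadable
    Log.e("TtsDemo", "Could not play back " + destFileName, e);
}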

But the TextToSpeech class offers other ways of associating audio resources with speech. So at this point we have a WAV file that contains the result of the synthesis of "Are you up yet?" in the previously selected language. We can tell our TTS instance to associate that string with an audio resource, which can be accessed through its path, or through the package it's in and its resource ID, using one of the two addSpeech() methods:

    mTts.addSpeech(wakeUpText, destFileName);

This way, any call to speak() with the same string content as wakeUpText will result in the playback of destFileName. If the file is missing, speak() will simply fall back to synthesizing and playing the given string. You can also take advantage of this feature to let the user customize how "Are you up yet?" sounds, by recording their own version if they choose to. Regardless of where that audio file comes from, you can still use the same line in your Activity code to ask repeatedly "Are you up yet?":

    mTts.speak(wakeUpText, TextToSpeech.QUEUE_ADD, myHashAlarm);

    When not in use...

    The text-to-speech functionality relies on a dedicated service shared across all applications that use that feature. When you are done using TTS, be a good citizen and tell it "you won't be needing its services anymore" by calling mTts.shutdown(), in your Activity onDestroy() method for instance.
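A minimal sketch of that cleanup:

@Override
protected void onDestroy() {
    if (mTts != null) {
        mTts.shutdown(); // release the connection to the shared TTS service
    }
    super.onDestroy();
}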

    Conclusion

    Android now talks, and so can your apps. Remember that in order for synthesized speech to be intelligible, you need to match the language you select to that of the text to synthesize. Text-to-speech can help you push your app in new directions. Whether you use TTS to help users with disabilities, to enable the use of your application while looking away from the screen, or simply to make it cool, we hope you'll enjoy this new feature.


    Thursday, September 17, 2009


    Introducing Quick Search Box for Android

  • Thursday, September 17, 2009
  • Ric RAT
• One of the new features we're really proud of in the Android 1.6 release is Quick Search Box for Android. This is our new system-wide search framework, which makes it possible for users to quickly and easily find what they're looking for, both on their devices and on the web. It suggests content on your device as you type, like apps, contacts, browser history, and music. It also offers results from the web: search suggestions, local business listings, and other info from Google, such as stock quotes, weather, and flight status. All of this is available right from the home screen, by tapping on Quick Search Box (QSB).

    What we're most excited about with this new feature is the ability for you, the developers, to leverage the QSB framework to provide quicker and easier access to the content inside your apps. Your apps can provide search suggestions that will surface to users in QSB alongside other search results and suggestions. This makes it possible for users to access your application's content from outside your application—for example, from the home screen.

    The code fragments below are related to a new demo app for Android 1.6 called Searchable Dictionary.


    The story before now: searching within your app

    In previous releases, we already provided a mechanism for you to expose search and search suggestions in your app as described in the docs for SearchManager. This mechanism has not changed and requires the following two things in your AndroidManifest.xml:

    1) In your <activity>, an intent filter, and a reference to a searchable.xml file (described below):

<intent-filter>
    <action android:name="android.intent.action.SEARCH" />
    <category android:name="android.intent.category.DEFAULT" />
</intent-filter>

<meta-data android:name="android.app.searchable"
           android:resource="@xml/searchable" />

    2) A content provider that can provide search suggestions according to the URIs and column formats specified by the Search Suggestions section of the SearchManager docs:

<!-- Provides search suggestions for words and their definitions. -->
<provider android:name="DictionaryProvider"
          android:authorities="dictionary"
          android:syncable="false" />
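The provider's implementation is described in the SearchManager docs; as a rough, hypothetical sketch, its query() method could build an android.database.MatrixCursor whose columns follow the suggestions contract. Here lookupMatches() and getDefinition() are invented helpers standing in for the real dictionary lookup:

@Override
public Cursor query(Uri uri, String[] projection, String selection,
        String[] selectionArgs, String sortOrder) {
    // With this searchable configuration, the text typed by the user
    // arrives as the last segment of the suggestions URI.
    String query = uri.getLastPathSegment();
    MatrixCursor cursor = new MatrixCursor(new String[] {
            BaseColumns._ID,
            SearchManager.SUGGEST_COLUMN_TEXT_1,     // first line: the word
            SearchManager.SUGGEST_COLUMN_TEXT_2,     // second line: its definition
            SearchManager.SUGGEST_COLUMN_INTENT_DATA // data passed to the VIEW intent
    });
    long id = 0;
    for (String word : lookupMatches(query)) {
        cursor.addRow(new Object[] { id++, word, getDefinition(word), word });
    }
    return cursor;
}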

    In the searchable.xml file, you specify a few things about how you want the search system to present search for your app, including the authority of the content provider that provides suggestions for the user as they type. Here's an example of the searchable.xml of an Android app that provides search suggestions within its own activities:

<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/search_label"
    android:searchSuggestAuthority="dictionary"
    android:searchSuggestIntentAction="android.intent.action.VIEW">
</searchable>

    Note that the android:searchSuggestAuthority attribute refers to the authority of the content provider we declared in AndroidManifest.xml.

    For more details on this, see the Searchability Metadata section of the SearchManager docs.

    Including your app in Quick Search Box

    In Android 1.6, we added a new attribute to the metadata for searchables: android:includeInGlobalSearch. By specifying this as "true" in your searchable.xml, you allow QSB to pick up your search suggestion content provider and include its suggestions along with the rest (if the user enables your suggestions from the system search settings).

    You should also specify a string value for android:searchSettingsDescription, which describes to users what sorts of suggestions your app provides in the system settings for search.

<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/search_label"
    android:searchSettingsDescription="@string/settings_description"
    android:includeInGlobalSearch="true"
    android:searchSuggestAuthority="dictionary"
    android:searchSuggestIntentAction="android.intent.action.VIEW">
</searchable>

    These new attributes are supported only in Android 1.6 and later.

    What to expect

    The first and most important thing to note is that when a user installs an app with a suggestion provider that participates in QSB, this new app will not be enabled for QSB by default. The user can choose to enable particular suggestion sources from the system settings for search (by going to "Search" > "Searchable items" in settings).

    You should consider how to handle this in your app. Perhaps show a notice that instructs the user to visit system settings and enable your app's suggestions.
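One way to do that, sketched below, is to send the user straight to the system search settings; SearchManager.INTENT_ACTION_SEARCH_SETTINGS should be the action for that screen, though you should verify it against the SDK version you target:

// Open the system "Search settings" screen so the user can enable our suggestions.
Intent intent = new Intent(SearchManager.INTENT_ACTION_SEARCH_SETTINGS);
startActivity(intent);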

    Once the user enables your searchable item, the app's suggestions will have a chance to show up in QSB, most likely under the "more results" section to begin with. As your app's suggestions are chosen more frequently, they can move up in the list.

    Shortcuts

    One of our objectives with QSB is to make it faster for users to access the things they access most often. One way we've done this is by 'shortcutting' some of the previously chosen search suggestions, so they will be shown immediately as the user starts typing, instead of waiting to query the content providers. Suggestions from your app may be chosen as shortcuts when the user clicks on them.

    For dynamic suggestions that may wish to change their content (or become invalid) in the future, you can provide a 'shortcut id'. This tells QSB to query your suggestion provider for up-to-date content for a suggestion after it has been displayed. For more details on how to manage shortcuts, see the Shortcuts section within the SearchManager docs.
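For instance, a hedged sketch of attaching a shortcut id when building a suggestions cursor; the row values here are made up:

MatrixCursor cursor = new MatrixCursor(new String[] {
        BaseColumns._ID,
        SearchManager.SUGGEST_COLUMN_TEXT_1,
        SearchManager.SUGGEST_COLUMN_SHORTCUT_ID // tells QSB to re-query for fresh content
});
cursor.addRow(new Object[] { 1, "apple", "suggestion_apple" });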


QSB provides a really cool way to make your app's content quicker for users to access. To help you get started, we've created a demo app which simply provides access to a small dictionary of words in QSB; it's called Searchable Dictionary, and we encourage you to check it out.


    Wednesday, May 6, 2009


    Painless threading

  • Wednesday, May 6, 2009
  • Ric RAT
• Whenever you first start an Android application, a thread called "main" is automatically created. The main thread, also called the UI thread, is very important because it is in charge of dispatching events to the appropriate widgets, including drawing events. It is also the thread on which you interact with Android widgets. For instance, if you touch a button on screen, the UI thread dispatches the touch event to the widget, which in turn sets its pressed state and posts an invalidate request to the event queue. The UI thread dequeues the request and notifies the widget to redraw itself.

This single-thread model can yield poor performance in Android applications that do not consider the implications. Since everything happens on a single thread, performing long operations, like network access or database queries, on this thread will block the whole user interface. No event can be dispatched, including drawing events, while the long operation is underway. From the user's perspective, the application appears hung. Even worse, if the UI thread is blocked for more than a few seconds (about 5 seconds currently), the user is presented with the infamous "application not responding" (ANR) dialog.

    If you want to see how bad this can look, write a simple application with a button that invokes Thread.sleep(2000) in its OnClickListener. The button will remain in its pressed state for about 2 seconds before going back to its normal state. When this happens, it is very easy for the user to perceive the application as slow.
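For example, a minimal sketch of that misbehaving listener; button is a hypothetical View obtained with findViewById():

button.setOnClickListener(new View.OnClickListener() {
    public void onClick(View v) {
        try {
            // Blocks the UI thread: no events are dispatched and nothing redraws.
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
});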

    Now that you know you must avoid lengthy operations on the UI thread, you will probably use extra threads (background or worker threads) to perform these operations, and rightly so. Let's take the example of a click listener downloading an image over the network and displaying it in an ImageView:

public void onClick(View v) {
    new Thread(new Runnable() {
        public void run() {
            Bitmap b = loadImageFromNetwork();
            mImageView.setImageBitmap(b);
        }
    }).start();
}

    At first, this code seems to be a good solution to your problem, as it does not block the UI thread. Unfortunately, it violates the single thread model: the Android UI toolkit is not thread-safe and must always be manipulated on the UI thread. In this piece of code, the ImageView is manipulated on a worker thread, which can cause really weird problems. Tracking down and fixing such bugs can be difficult and time-consuming.

Android offers several ways to access the UI thread from other threads. You may already be familiar with some of them, but here is a comprehensive list:

• Activity.runOnUiThread(Runnable)
• View.post(Runnable)
• View.postDelayed(Runnable, long)
• Handler

    Any of these classes and methods could be used to correct our previous code example:

public void onClick(View v) {
    new Thread(new Runnable() {
        public void run() {
            final Bitmap b = loadImageFromNetwork();
            mImageView.post(new Runnable() {
                public void run() {
                    mImageView.setImageBitmap(b);
                }
            });
        }
    }).start();
}

Unfortunately, these classes and methods also tend to make your code more complicated and more difficult to read. It becomes even worse when you implement complex operations that require frequent UI updates. To remedy this problem, Android 1.5 offers a new utility class, called AsyncTask, that simplifies the creation of long-running tasks that need to communicate with the user interface.

AsyncTask is also available for Android 1.0 and 1.1 under the name UserTask. It offers the exact same API and all you have to do is copy its source code into your application.

    The goal of AsyncTask is to take care of thread management for you. Our previous example can easily be rewritten with AsyncTask:

public void onClick(View v) {
    new DownloadImageTask().execute("http://example.com/image.png");
}

private class DownloadImageTask extends AsyncTask<String, Void, Bitmap> {
    protected Bitmap doInBackground(String... urls) {
        return loadImageFromNetwork(urls[0]);
    }

    protected void onPostExecute(Bitmap result) {
        mImageView.setImageBitmap(result);
    }
}

As you can see, AsyncTask must be used by subclassing it. It is also very important to remember that an AsyncTask instance has to be created on the UI thread and can be executed only once. You can read the AsyncTask documentation for a full understanding of how to use this class, but here is a quick overview of how it works:

• doInBackground() executes automatically on a worker thread
• onPreExecute(), onPostExecute() and onProgressUpdate() are all invoked on the UI thread
• The value returned by doInBackground() is sent to onPostExecute()
• You can call publishProgress() at any time in doInBackground() to execute onProgressUpdate() on the UI thread (as sketched below)
• You can cancel the task at any time, from any thread
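To illustrate the publishProgress() flow from the list above, here is a small, hypothetical task; mProgressBar is an invented field standing in for a real ProgressBar:

private class CountingTask extends AsyncTask<Integer, Integer, Integer> {
    protected Integer doInBackground(Integer... params) {
        int total = params[0];
        for (int i = 1; i <= total; i++) {
            publishProgress(i); // schedules onProgressUpdate() on the UI thread
        }
        return total;
    }

    protected void onProgressUpdate(Integer... progress) {
        mProgressBar.setProgress(progress[0]); // safe: this runs on the UI thread
    }
}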

    In addition to the official documentation, you can read several complex examples in the source code of Shelves (ShelvesActivity.java and AddBookActivity.java) and Photostream (LoginActivity.java, PhotostreamActivity.java and ViewPhotoActivity.java). I highly recommend reading the source code of Shelves to see how to persist tasks across configuration changes and how to cancel them properly when the activity is destroyed.

    Regardless of whether or not you use AsyncTask, always remember these two rules about the single thread model: do not block the UI thread and make sure the Android UI toolkit is only accessed on the UI thread. AsyncTask just makes it easier to do both of these things.

    If you want to learn more cool techniques, come join us at Google I/O. Members of the Android team will be there to give a series of in-depth technical sessions and answer all your questions.

