Sunday, 1 December 2013

Important meanings

enumerate: mention (a number of things) one by one.

How to instantiate I2C devices

Unlike PCI or USB devices, I2C devices are not enumerated at the hardware
level. Instead, the software must know which devices are connected on each
I2C bus segment, and what address these devices are using. For this
reason, the kernel code must instantiate I2C devices explicitly. There are
several ways to achieve this, depending on the context and requirements.

Method 1: Declare the I2C devices by bus number

This method is appropriate when the I2C bus is a system bus as is the case
for many embedded systems. On such systems, each I2C bus has a number
which is known in advance. It is thus possible to pre-declare the I2C
devices which live on this bus. This is done with an array of struct
i2c_board_info which is registered by calling i2c_register_board_info().

Example (from omap2 h4):

static struct i2c_board_info h4_i2c_board_info[] __initdata = {
	{
		I2C_BOARD_INFO("isp1301_omap", 0x2d),
		.irq		= OMAP_GPIO_IRQ(125),
	},
	{ /* EEPROM on mainboard */
		I2C_BOARD_INFO("24c01", 0x52),
		.platform_data	= &m24c01,
	},
	{ /* EEPROM on cpu card */
		I2C_BOARD_INFO("24c01", 0x57),
		.platform_data	= &m24c01,
	},
};

static void __init omap_h4_init(void)
{
	(...)
	i2c_register_board_info(1, h4_i2c_board_info,
			ARRAY_SIZE(h4_i2c_board_info));
	(...)
}

The above code declares 3 devices on I2C bus 1, including their respective
addresses and custom data needed by their drivers. When the I2C bus in
question is registered, the I2C devices will be instantiated automatically
by i2c-core.

The devices will be automatically unbound and destroyed when the I2C bus
they sit on goes away (if ever.)

Method 2: Instantiate the devices explicitly

This method is appropriate when a larger device uses an I2C bus for
internal communication. A typical case is TV adapters. These can have a
tuner, a video decoder, an audio decoder, etc. usually connected to the
main chip by the means of an I2C bus. You won't know the number of the I2C
bus in advance, so the method 1 described above can't be used. Instead,
you can instantiate your I2C devices explicitly. This is done by filling
a struct i2c_board_info and calling i2c_new_device().

Example (from the sfe4001 network driver):

static struct i2c_board_info sfe4001_hwmon_info = {
	I2C_BOARD_INFO("max6647", 0x4e),
};

int sfe4001_init(struct efx_nic *efx)
{
	(...)
	efx->board_info.hwmon_client =
		i2c_new_device(&efx->i2c_adap, &sfe4001_hwmon_info);
	(...)
}


The above code instantiates 1 I2C device on the I2C bus which is on the
network adapter in question.

A variant of this is when you don't know for sure if an I2C device is
present or not (for example for an optional feature which is not present
on cheap variants of a board but you have no way to tell them apart), or
it may have different addresses from one board to the next (manufacturer
changing its design without notice). In this case, you can call
i2c_new_probed_device() instead of i2c_new_device().

Example (from the nxp OHCI driver):

static const unsigned short normal_i2c[] = { 0x2c, 0x2d, I2C_CLIENT_END };

static int usb_hcd_nxp_probe(struct platform_device *pdev)
{
	(...)
	struct i2c_adapter *i2c_adap;
	struct i2c_board_info i2c_info;

	(...)
	i2c_adap = i2c_get_adapter(2);
	memset(&i2c_info, 0, sizeof(struct i2c_board_info));
	strlcpy(i2c_info.type, "isp1301_nxp", I2C_NAME_SIZE);
	isp1301_i2c_client = i2c_new_probed_device(i2c_adap, &i2c_info,
						   normal_i2c, NULL);
	i2c_put_adapter(i2c_adap);
	(...)
}

The above code instantiates up to 1 I2C device on the I2C bus which is on
the OHCI adapter in question. It first tries at address 0x2c, if nothing
is found there it tries address 0x2d, and if still nothing is found, it
simply gives up.
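The try-each-address-in-order behaviour described above can be sketched in userspace Python. This is a conceptual sketch only: `new_probed_device` and `probe` are hypothetical stand-ins, while the real logic lives in i2c-core and issues actual bus transactions to test each address.

```python
# Sketch of the address-probing logic behind i2c_new_probed_device():
# try each candidate address in order and stop at the first that
# responds. probe() stands in for the bus-level presence check.

I2C_CLIENT_END = 0xFFFE  # sentinel terminating an address list


def new_probed_device(addr_list, probe):
    """Return the first responding address, or None if all fail."""
    for addr in addr_list:
        if addr == I2C_CLIENT_END:
            break
        if probe(addr):
            return addr   # a device answered: instantiate it here
    return None           # nothing found: give up


# Example: a device answers only at 0x2d, as on some board revisions.
present = {0x2d}
found = new_probed_device([0x2c, 0x2d, I2C_CLIENT_END],
                          lambda addr: addr in present)
```

With the address list `{0x2c, 0x2d}` from the OHCI example, this returns 0x2d; with no device present it returns None, matching the "simply gives up" case.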

The driver which instantiated the I2C device is responsible for destroying
it on cleanup. This is done by calling i2c_unregister_device() on the
pointer that was earlier returned by i2c_new_device() or
i2c_new_probed_device().

Method 3: Probe an I2C bus for certain devices

Sometimes you do not have enough information about an I2C device, not even
to call i2c_new_probed_device(). The typical case is hardware monitoring
chips on PC mainboards. There are several dozen models, which can live
at 25 different addresses. Given the huge number of mainboards out there,
it is next to impossible to build an exhaustive list of the hardware
monitoring chips being used. Fortunately, most of these chips have
manufacturer and device ID registers, so they can be identified by
software.

In that case, I2C devices are neither declared nor instantiated
explicitly. Instead, i2c-core will probe for such devices as soon as their
drivers are loaded, and if any is found, an I2C device will be
instantiated automatically. In order to prevent any misbehavior of this
mechanism, the following restrictions apply:
* The I2C device driver must implement the detect() method, which
  identifies a supported device by reading from arbitrary registers.
* Only buses which are likely to have a supported device and agree to be
  probed, will be probed. For example this avoids probing for hardware
  monitoring chips on a TV adapter.

See lm90_driver and lm90_detect() in drivers/hwmon/lm90.c for an example.
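Conceptually, a detect() callback reads the ID registers and compares them against values the driver knows about. The following Python sketch illustrates only that idea; the register addresses, ID values, and chip names are hypothetical, not those of the lm90 or any real chip.

```python
# Sketch of what a detect() callback does: read the manufacturer and
# device ID registers, then look the pair up in a table of supported
# chips. All addresses and IDs below are made up for illustration.

MAN_ID_REG = 0xFE    # hypothetical manufacturer-ID register
CHIP_ID_REG = 0xFF   # hypothetical device-ID register

KNOWN_CHIPS = {
    (0x01, 0xA1): "examplechip1",
    (0x01, 0xA2): "examplechip2",
}


def detect(read_reg):
    """Return the detected chip name, or None if unsupported."""
    ids = (read_reg(MAN_ID_REG), read_reg(CHIP_ID_REG))
    return KNOWN_CHIPS.get(ids)


# A fake register map standing in for SMBus byte reads:
regs = {0xFE: 0x01, 0xFF: 0xA2}
name = detect(regs.get)
```

If the ID pair is not in the table, detect() reports no match and i2c-core does not instantiate a device, which is exactly how misdetection is avoided.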

I2C devices instantiated as a result of such a successful probe will be
destroyed automatically when the driver which detected them is removed,
or when the underlying I2C bus is itself destroyed, whichever happens
first.

Those of you familiar with the i2c subsystem of 2.4 kernels and early 2.6
kernels will find out that this method 3 is essentially similar to what
was done there. Two significant differences are:
* Probing is only one way to instantiate I2C devices now, while it was the
  only way back then. Where possible, methods 1 and 2 should be preferred.
  Method 3 should only be used when there is no other way, as it can have
  undesirable side effects.
* I2C buses must now explicitly say which I2C driver classes can probe
  them (by the means of the class bitfield), while all I2C buses were
  probed by default back then. The default is an empty class which means
  that no probing happens. The purpose of the class bitfield is to limit
  the aforementioned undesirable side effects.

Once again, method 3 should be avoided wherever possible. Explicit device
instantiation (methods 1 and 2) is much preferred for it is safer and
faster.

Method 4: Instantiate from user-space

In general, the kernel should know which I2C devices are connected and
what addresses they live at. However, in certain cases, it does not, so a
sysfs interface was added to let the user provide the information. This
interface is made of 2 attribute files which are created in every I2C bus
directory: new_device and delete_device. Both files are write only and you
must write the right parameters to them in order to properly instantiate,
respectively delete, an I2C device.

File new_device takes 2 parameters: the name of the I2C device (a string)
and the address of the I2C device (a number, typically expressed in
hexadecimal starting with 0x, but can also be expressed in decimal.)
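The "hex or decimal" address parsing can be mimicked in Python with `int(s, 0)`, which accepts both a 0x prefix and plain decimal. The helper name `parse_new_device` below is hypothetical; it only illustrates how the two parameters of the new_device file are interpreted.

```python
# The address written to new_device may be hexadecimal (0x-prefixed)
# or decimal. Python's int(s, 0) mirrors that flexible parsing.

def parse_new_device(line):
    """Split a 'name address' line the way the new_device file expects."""
    name, addr_str = line.split()
    return name, int(addr_str, 0)  # base 0 accepts "0x50" and "80" alike


hex_form = parse_new_device("eeprom 0x50")
dec_form = parse_new_device("eeprom 80")
```

Both lines describe the same device: an "eeprom" at address 0x50 (decimal 80).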

File delete_device takes a single parameter: the address of the I2C
device. As no two devices can live at the same address on a given I2C
segment, the address is sufficient to uniquely identify the device to be
deleted.

# echo eeprom 0x50 > /sys/bus/i2c/devices/i2c-3/new_device

While this interface should only be used when in-kernel device declaration
can't be done, there is a variety of cases where it can be helpful:
* The I2C driver usually detects devices (method 3 above) but the bus
  segment your device lives on doesn't have the proper class bit set and
  thus detection doesn't trigger.
* The I2C driver usually detects devices, but your device lives at an
  unexpected address.
* The I2C driver usually detects devices, but your device is not detected,
  either because the detection routine is too strict, or because your
  device is not officially supported yet but you know it is compatible.
* You are developing a driver on a test board, where you soldered the I2C
  device yourself.

This interface is a replacement for the force_* module parameters some I2C
drivers implement. Being implemented in i2c-core rather than in each
device driver individually, it is much more efficient, and also has the
advantage that you do not have to reload the driver to change a setting.
You can also instantiate the device before the driver is loaded or even
available, and you don't need to know what driver the device needs.

Internal input event handling in the Linux kernel and the Android userspace

(I found this post on the Internet.)
While figuring out hardware buttons for my NITDroid project, I had the opportunity of exploring the way Linux and Android handle input events internally before passing them through to the user application. This post traces the propagation of an input event from the Linux kernel through the Android user space as far as I understand it. Although the principles are likely the same for essentially any input device, I will be drawing on my investigations of the drivers for the LM8323 hardware keyboard (drivers/input/keyboard/lm8323.c) and the TSC2005 touchscreen (drivers/input/touchscreen/tsc2005.c) which are both found inside the Nokia N810.
I. Inside the Linux kernel
Firstly, Linux externally exposes a uniform input event interface for each device as /dev/input/eventX where X is an integer. This means these "devices" can be polled in the same way and the events they produce arrive in the same uniform format. To accomplish this, Linux has a standard set of routines that every device driver uses to register / unregister the hardware it manages and publish the input events it receives.
When the driver module of an input device is first loaded into the kernel, its initialization routine usually sets up some sort of probing to detect the presence of the types of hardware it is supposed to manage. This probing is of course device-specific; however, if it is successful, the module will eventually invoke the function input_register_device(…) in include/linux/input.h which sets up a file representing the physical device as /dev/input/eventX where X is some integer. The module will also register a function to handle IRQs originating from the hardware it manages via request_irq(…) (include/linux/interrupt.h) so that the module will be notified when the user interacts with the physical device it manages.
When the user physically interacts with the hardware (for instance by pushing / releasing a key or exerting / lifting pressure on the touchscreen), an IRQ is fired and Linux invokes the IRQ handler registered by the corresponding device driver. However, IRQ handlers must return quickly, as they essentially block the entire system while executing and thus cannot perform any lengthy processing; typically, therefore, an IRQ handler will merely 1) save the data carried by the IRQ, 2) ask the kernel to schedule a method that will process the event later on, once we have exited IRQ mode, and 3) tell the kernel we have handled the IRQ and exit. This can be very straightforward, as in the IRQ handler in the driver for the LM8323 keyboard inside the N810:
/*
 * We cannot use I2C in interrupt context, so we just schedule work.
 */
static irqreturn_t lm8323_irq(int irq, void *data)
{
        struct lm8323_chip *lm = data;

        schedule_work(&lm->work);

        return IRQ_HANDLED;
}
It could also be more complex as the one in the driver of the TSC2005 touchscreen controller (tsc2005_ts_irq_handler(…)) as it integrates into the SPI framework (which I have never looked into…).
Some time later, the kernel executes the scheduled method to process the recently saved event. Invariably, this method would report the event in a standard format by calling one or more of the input_* functions in include/linux/input.h; these include input_event(…) (general purpose), input_report_key(…) (for key down and key up events), input_report_abs(…) (for position events e.g. from a touchscreen) among others. Note that the input_report_*(…) functions are really just convenience functions that call input_event(…) internally, as defined in include/linux/input.h. It is likely that a lot of processing happens before the event is published via these methods; the LM8323 driver for instance does an internal key code mapping step and the TSC2005 driver goes through this crazy arithmetic involving Ohms (to calculate a pressure index from resistance data?). Furthermore, one physical IRQ could correspond to multiple published input events, and vice versa. Finally, when all event publishing is finished, the event processing method calls input_sync(…) to flush the event out. The event is now ready to be accessed by the userspace at /dev/input/eventX.
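The uniform record format flushed out to /dev/input/eventX is struct input_event: a timeval (seconds, microseconds) followed by a 16-bit type, a 16-bit code, and a 32-bit value. The sketch below decodes one such record in Python; the 'llHHi' format uses the platform's native long for the time fields, which matches the running kernel's layout on typical Linux builds (16 bytes on 32-bit, 24 on 64-bit).

```python
import struct

# Decode one struct input_event record as read from /dev/input/eventX.
# Layout: timeval (sec, usec) + __u16 type + __u16 code + __s32 value.
EVENT_FMT = 'llHHi'                     # native long matches the kernel ABI
EVENT_SIZE = struct.calcsize(EVENT_FMT)


def decode_event(buf):
    sec, usec, etype, code, value = struct.unpack(EVENT_FMT, buf)
    return {'time': sec + usec / 1e6, 'type': etype,
            'code': code, 'value': value}


# A synthetic key-down event: type 1 is EV_KEY, code 30 is KEY_A,
# value 1 means "pressed".
raw = struct.pack(EVENT_FMT, 1386000000, 500000, 1, 30, 1)
ev = decode_event(raw)
```

In a real program you would `read(fd, EVENT_SIZE)` from an opened /dev/input/eventX node and feed each chunk through decode_event.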
II. Inside the Android userspace
When the Android GUI starts up, an instance of the class WindowManagerService (frameworks/base/services/java/com/android/server/) is created. This class, when constructed, initializes the member field
final KeyQ mQueue;
where KeyQ, defined as a private class inside the same file, extends Android’s basic input handling class, the abstract class KeyInputQueue (frameworks/base/services/java/com/android/server/ and frameworks/base/services/jni/com_android_server_KeyInputQueue.cpp). As mQueue is instantiated, it of course calls the constructor of KeyInputQueue; the latter, inconspicuously, starts an anonymous thread it owns that is at the heart of the event handling system in Android:
Thread mThread = new Thread("InputDeviceReader") {
    public void run() {
        RawInputEvent ev = new RawInputEvent();
        while (true) {
            try {
                InputDevice di;
                readEvent(ev);  // block, doesn't release the monitor

                boolean send = false;
                if (ev.type == RawInputEvent.EV_DEVICE_ADDED) {
                    // ...
                } else if (ev.type == RawInputEvent.EV_DEVICE_REMOVED) {
                    // ...
                } else {
                    di = getInputDevice(ev.deviceId);
                    // first crack at it
                    send = preprocessEvent(di, ev);
                    // ... (type, scancode, classes are derived from ev and di)
                }
                if (!send) {
                    continue;
                }
                synchronized (mFirst) {
                    // Is it a key event?
                    if (type == RawInputEvent.EV_KEY &&
                            (classes&RawInputEvent.CLASS_KEYBOARD) != 0 &&
                            (scancode < RawInputEvent.BTN_FIRST ||
                                    scancode > RawInputEvent.BTN_LAST)) {
                        boolean down;
                        if (ev.value != 0) {
                            down = true;
                            di.mKeyDownTime = curTime;
                        } else {
                            down = false;
                        }
                        int keycode = rotateKeyCodeLocked(ev.keycode);
                        addLocked(di, curTimeNano, ev.flags,
                                newKeyEvent(di, di.mKeyDownTime, curTime, down,
                                        keycode, 0, scancode,
                                        ((ev.flags & WindowManagerPolicy.FLAG_WOKE_HERE) != 0)
                                         ? KeyEvent.FLAG_WOKE_HERE : 0));
                    } else if (ev.type == RawInputEvent.EV_KEY) {
                        // ...
                    } else if (ev.type == RawInputEvent.EV_ABS &&
                            (classes&RawInputEvent.CLASS_TOUCHSCREEN_MT) != 0) {
                        // Process position events from multitouch protocol.
                        // ...
                    } else if (ev.type == RawInputEvent.EV_ABS &&
                            (classes&RawInputEvent.CLASS_TOUCHSCREEN) != 0) {
                        // Process position events from single touch protocol.
                        // ...
                    } else if (ev.type == RawInputEvent.EV_REL &&
                            (classes&RawInputEvent.CLASS_TRACKBALL) != 0) {
                        // Process movement events from trackball (mouse) protocol.
                        // ...
                    }
                }
            } catch (RuntimeException exc) {
                Slog.e(TAG, "InputReaderThread uncaught exception", exc);
            }
        }
    }
};
I have removed most of this ~350-line function that is irrelevant to our discussion and reformatted the code for easier reading. The key idea is that this independent thread will
  1. Read an event
  2. Call the preprocessEvent(…) method of its derived class, offering the latter a chance to prevent the event from being propagated further
  3. Add it to the event queue owned by the class
This InputDeviceReader thread started by WindowManagerService (indirectly via KeyInputQueue's constructor) is thus THE event loop of the Android UI.
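The read / preprocess / enqueue pattern of InputDeviceReader can be boiled down to a small Python sketch. Everything here is a stand-in: events come from a list instead of the kernel, and the preprocess hook is a plain callable, but the shape of the loop is the same.

```python
import queue
import threading

# Minimal sketch of the InputDeviceReader pattern: a dedicated thread
# reads raw events, gives a preprocess hook "first crack" at dropping
# them, and enqueues the survivors for the dispatcher thread.


def reader_loop(read_events, preprocess, out_queue):
    for ev in read_events():
        if preprocess(ev):          # hook may veto propagation
            out_queue.put(ev)


events = [{"type": "EV_KEY", "value": 1},   # key down: keep
          {"type": "EV_SYN", "value": 0}]   # sync marker: drop

q = queue.Queue()
t = threading.Thread(
    target=reader_loop,
    args=(lambda: iter(events),
          lambda ev: ev["type"] == "EV_KEY",  # keep only key events
          q))
t.start()
t.join()
```

After the thread finishes, only the key event sits in the queue, ready for a dispatcher loop to pick up.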
But we are still missing the link from the kernel to this InputDeviceReader. What exactly is this magical readEvent(…)? It turns out that this is actually a native method implemented in the C++ half of KeyInputQueue:
static Mutex gLock;
static sp<EventHub> gHub;

static jboolean
android_server_KeyInputQueue_readEvent(JNIEnv* env, jobject clazz,
                                          jobject event)
{
    sp<EventHub> hub = gHub;
    if (hub == NULL) {
        hub = new EventHub;
        gHub = hub;
    }

    bool res = hub->getEvent(&deviceId, &type, &scancode, &keycode,
            &flags, &value, &when);
    // ... (copy the fields into the Java event object)

    return res;
}
Ah, so readEvent is really just a proxy for EventHub::getEvent(…). If we proceed to look up EventHub in frameworks/base/libs/ui/EventHub.cpp, we find
int EventHub::scan_dir(const char *dirname)
{
    // ...
    dir = opendir(dirname);
    while((de = readdir(dir))) {
        // ... open each device node found ...
    }
    return 0;
}

static const char *device_path = "/dev/input";

bool EventHub::openPlatformInput(void)
{
    // ...
    res = scan_dir(device_path);
    return true;
}

bool EventHub::getEvent(int32_t* outDeviceId, int32_t* outType,
        int32_t* outScancode, int32_t* outKeycode, uint32_t *outFlags,
        int32_t* outValue, nsecs_t* outWhen)
{
    // ...
    if (!mOpened) {
        mError = openPlatformInput() ? NO_ERROR : UNKNOWN_ERROR;
        mOpened = true;
    }

    while(1) {
        // First, report any devices that had last been added/removed.
        if (mClosingDevices != NULL) {
            // ...
            *outType = DEVICE_REMOVED;
            delete device;
            return true;
        }
        if (mOpeningDevices != NULL) {
            // ...
            *outType = DEVICE_ADDED;
            return true;
        }

        pollres = poll(mFDs, mFDCount, -1);

        // mFDs[0] is used for inotify, so process regular events starting at mFDs[1]
        for(i = 1; i < mFDCount; i++) {
            if(mFDs[i].revents) {
                if(mFDs[i].revents & POLLIN) {
                    res = read(mFDs[i].fd, &iev, sizeof(iev));
                    if (res == sizeof(iev)) {
                        // ...
                        *outType = iev.type;
                        *outScancode = iev.code;
                        if (iev.type == EV_KEY) {
                            err = mDevices[i]->layoutMap->map(iev.code, outKeycode, outFlags);
                            // ...
                        } else {
                            *outKeycode = iev.code;
                        }
                        // ...
                        return true;
                    } else {
                        // Error handling
                        // ...
                    }
                }
            }
        }
    }
}
Again, most of the details have been stripped out from the above code, but we now see how readEvent() in KeyInputQueue is getting these events from Linux: on first call, EventHub::getEvent scans the directory /dev/input for input devices, opens them and saves their file descriptors in an array called mFDs. Then whenever it is called again, it tries to read from each of these input devices by simply calling the read(2) Linux system call.
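The poll-then-read multiplexing that EventHub::getEvent does over the /dev/input file descriptors can be reproduced in a few lines of userspace Python. In this sketch two pipes stand in for the input device nodes, and `get_event` is a hypothetical name for the loop body.

```python
import os
import select

# Userspace sketch of the EventHub::getEvent() pattern: poll() a set of
# file descriptors and read from whichever one becomes readable. Two
# pipes stand in for the /dev/input/eventX device nodes.


def get_event(poller, fd_names):
    """Block until any registered fd is readable; return (name, data)."""
    for fd, events in poller.poll():
        if events & select.POLLIN:
            return fd_names[fd], os.read(fd, 4096)


r1, w1 = os.pipe()
r2, w2 = os.pipe()
poller = select.poll()
poller.register(r1, select.POLLIN)
poller.register(r2, select.POLLIN)
names = {r1: "event0", r2: "event1"}

os.write(w2, b"key-down")          # simulate hardware producing an event
source, data = get_event(poller, names)
```

Just as in EventHub, the reader does not care which device fires first; poll() wakes it up and it reads from whichever descriptor has data.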
OK, now we know how an event propagates through EventHub::getEvent(…) to KeyInputQueue::readEvent(…) and then into the InputDeviceReader thread, where it can get queued inside WindowManagerService.mQueue (which, as a reminder, extends the otherwise abstract KeyInputQueue). But what happens then? How does that event get to the client application?
Well, it turns out that WindowManagerService has yet another private member class that handles just that:
private final class InputDispatcherThread extends Thread {
    @Override public void run() {
        while (true) {
            try {
                process();
            } catch (Exception e) {
                Slog.e(TAG, "Exception in input dispatcher", e);
            }
        }
    }

    private void process() {
        // ...
        while (true) {
            // Retrieve next event, waiting only as long as the next
            // repeat timeout.  If the configuration has changed, then
            // don't wait at all -- we'll report the change as soon as
            // we have processed all events.
            QueuedEvent ev = mQueue.getEvent(
                (int)((!configChanged && curTime < nextKeyTime)
                        ? (nextKeyTime-curTime) : 0));
            try {
                if (ev != null) {
                    curTime = SystemClock.uptimeMillis();
                    int eventType;
                    if (ev.classType == RawInputEvent.CLASS_TOUCHSCREEN) {
                        eventType = eventType((MotionEvent)ev.event);
                    } else if (ev.classType == RawInputEvent.CLASS_KEYBOARD ||
                                ev.classType == RawInputEvent.CLASS_TRACKBALL) {
                        eventType = LocalPowerManager.BUTTON_EVENT;
                    } else {
                        eventType = LocalPowerManager.OTHER_EVENT;
                    }
                    // ...
                    switch (ev.classType) {
                        case RawInputEvent.CLASS_KEYBOARD:
                            KeyEvent ke = (KeyEvent)ev.event;
                            if (ke.isDown()) {
                                lastKey = ke;
                                downTime = curTime;
                                keyRepeatCount = 0;
                                lastKeyTime = curTime;
                                nextKeyTime = lastKeyTime
                                        + ViewConfiguration.getLongPressTimeout();
                            } else {
                                lastKey = null;
                                downTime = 0;
                                // Arbitrary long timeout.
                                lastKeyTime = curTime;
                                nextKeyTime = curTime + LONG_WAIT;
                            }
                            dispatchKey((KeyEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_TOUCHSCREEN:
                            dispatchPointer(ev, (MotionEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_TRACKBALL:
                            dispatchTrackball(ev, (MotionEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_CONFIGURATION_CHANGED:
                            configChanged = true;
                            break;
                    }
                } else if (configChanged) {
                    // ...
                } else if (lastKey != null) {
                    // ...
                } else {
                    // ...
                }
            } catch (Exception e) {
                Slog.e(TAG,
                        "Input thread received uncaught exception: " + e, e);
            }
        }
    }
}
As we can see, this thread started by WindowManagerService is very simple; all it does is
  1. Grab events queued into WindowManagerService.mQueue
  2. Call WindowManagerService.dispatchKey(…) when appropriate.
If we next inspect WindowManagerService.dispatchKey(…), we see that it checks the currently focused window and calls android.view.IWindow.dispatchKey(…) on that window. The event is now in the user space.

Monday, 25 November 2013

Robotium for Testing Android Application


        1) Android application apk file for Testing. 
             Ex: ApplicationToTest.apk
        2) Eclipse for building Test project
        3) ADT (Android Development Tools)
        4) SDK (Software Development Kit)
        5) JDK (Java Development Kit)
        6) robotium-solo-1.7.1.jar

       Prerequisites for creating test project:     
        * Install eclipse, ADT, SDK, JDK to your system.  
        * After installation give proper path in environmental variable

                 [For more help go to: …/index.html ]
                [To download the robotium-solo-1.7.1.jar and Javadoc: ]

NOTE: In this example the application apk file has the following package name: “com.Example.ApplicationToTest” and the apk name is: ApplicationToTest.apk

   STEP 1: CREATE THE TEST PROJECT

              Create the test project by:
              File -> New -> Project -> Android -> Android Test Project

     Fill all the following fields to create the test project:

       * Test Project Name: ExampleApplicationTesting

       * Test Target: Click on “This Project”

       * Build Target: If the application was developed using SDK version 7 then select Android 2.1 - update 1. If it was developed with SDK version 8 then select Android 2.2

       * Properties: Application name: ApplicationTesting

       * Package name: com.Example.ApplicationTesting

       * Min SDK version: Default value will be there according to Build Target selection
      Then click on “finish”
      A new project with the name: ExampleApplicationTesting is created.

   STEP 2: DO THE FOLLOWING CHANGES IN “AndroidManifest.xml” 

Open package “ExampleApplicationTesting” there you will find the file AndroidManifest.xml

     Open the AndroidManifest.xml and in the instrumentation tag change the target package from:

<instrumentation android:targetPackage="com.Example.ApplicationTesting" ... />

     to:

<instrumentation android:targetPackage="com.Example.ApplicationToTest" ... />

    If you do not know the exact package name then type this in the DOS prompt
> launch the emulator
> adb install testapplication.apk
> adb logcat
Run the application once and you will get the exact package name

Select the package and right click it and select: New -> Class

       Use the class name: ExampleTest and click on “finish”
   Copy this code into the editor:


        import android.test.ActivityInstrumentationTestCase2;
        import com.jayway.android.robotium.solo.Solo;

        @SuppressWarnings("rawtypes")
        public class ExampleTest extends ActivityInstrumentationTestCase2 {

            private static final String TARGET_PACKAGE_ID = "com.Example.ApplicationToTest";
            private static final String LAUNCHER_ACTIVITY_FULL_CLASSNAME = "com.Example.ApplicationToTest.MainMenuSettings";

            private static Class<?> launcherActivityClass;
            static {
                try {
                    launcherActivityClass = Class.forName(LAUNCHER_ACTIVITY_FULL_CLASSNAME);
                } catch (ClassNotFoundException e) {
                    throw new RuntimeException(e);
                }
            }

            @SuppressWarnings("unchecked")
            public ExampleTest() throws ClassNotFoundException {
                super(TARGET_PACKAGE_ID, launcherActivityClass);
            }

            private Solo solo;

            @Override
            protected void setUp() throws Exception {
                solo = new Solo(getInstrumentation(), getActivity());
            }

            public void testCanOpenSettings() {
                // Put the steps of your test here.
            }

            @Override
            public void tearDown() throws Exception {
                try {
                    solo.finalize();
                } catch (Throwable e) {
                    e.printStackTrace();
                }
                getActivity().finish();
                super.tearDown();
            }
        }
private static final String TARGET_PACKAGE_ID = "com.Example.ApplicationToTest";
private static final String LAUNCHER_ACTIVITY_FULL_CLASSNAME = "com.Example.ApplicationToTest.MainMenuSettings";

In this example "com.Example.ApplicationToTest" is the package name and
"MainMenuSettings" is the launcher activity name. In general it should look like this:
private static final String LAUNCHER_ACTIVITY_FULL_CLASSNAME = "packagename.launchername";

If you do not know the exact package and launcher names follow these steps in the DOS prompt
> launch the emulator
> adb install testapplication.apk
> adb logcat
The exact package name and launcher name will be printed

STEP 3: ADD THE ROBOTIUM JAR TO THE BUILD PATH

Add the latest version of the robotium jar file to the project.

Right click on “ExampleApplicationTesting” project -> Build path -> Configure Build Path

     Then select Add External Jars -> select robotium jar file -> Open -> OK


STEP 4: The apk file has to have the same certificate signature that your test project has 


The signature identifies the author of the Android application. It contains
information such as the first name and last name of the developer, the name of the
organizational unit, the organization, the city, the state, and the two-letter country code.

Standard tools like Keytool and Jarsigner are used to generate keys and sign applications.

[For more help: …-signing.html ]

     * If you know the certificate signature then you need to use the same signature in your test project
     * If you do not know the certificate signature then you need to delete the certificate signature and use the same android debug key signature in both the application and the test project
     * If the application is unsigned then you need to sign the application apk with the android debug key

       If the application is signed then follow these steps:
              -- Un-zip the apk file
              -- Delete the META-INF folder
              -- Re-zip the apk file
              -- In the DOS prompt / command prompt:
       > jarsigner -keystore ~/.android/debug.keystore -storepass android -keypass android ApplicationToTest.apk androiddebugkey
       > zipalign 4 ApplicationToTest.apk TempApplicationToTest.apk
Then rename TempApplicationToTest.apk to ApplicationToTest.apk

If it is an unsigned application then:
-- In the DOS prompt / command prompt:

    > jarsigner -keystore ~/.android/debug.keystore -storepass android -keypass android ApplicationToTest.apk androiddebugkey
     > zipalign 4 ApplicationToTest.apk TempApplicationToTest.apk
Then rename TempApplicationToTest.apk to ApplicationToTest.apk
[For more help: …-signing.html ]


STEP 5: RUN THE TEST CASES

Right click on the test project -> Run As -> Android JUnit Test


       * Use adb to install the application apk:
                  > adb install ApplicationToTest.apk

      * Use adb to install the test project apk:
                  > adb install ExampleTesting.apk

      * Run the test cases:
                  > adb shell am instrument -w com.Example.ApplicationTesting/android.test.InstrumentationTestRunner

Thursday, 21 November 2013

Embedded system Characteristics

Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music. Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.

User interface

The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or Flash memory chips. They run with limited computer hardware resources: little memory, and a small or non-existent keyboard or screen.

Embedded systems range from no user interface at all — dedicated only to one task — to complex graphical user interfaces that resemble modern computer desktop operating systems. Simple embedded devices use buttons, LEDs, and graphic or character LCDs (for example, the popular HD44780 LCD) with a simple menu system.
More sophisticated devices which use a graphical screen with touch sensing or screen-edge buttons provide flexibility while minimizing space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what's desired. Handheld systems often have a screen with a "joystick button" for a pointing device.
Some systems provide a user interface remotely with the help of a serial (e.g. RS-232, USB, I²C, etc.) or network (e.g. Ethernet) connection. This approach gives several advantages: it extends the capabilities of the embedded system, avoids the cost of a display, simplifies the BSP, and allows a rich user interface to be built on the PC. A good example of this is the combination of an embedded web server running on an embedded device such as an IP camera or a network router. The user interface is displayed in a web browser on a PC connected to the device, so no bespoke software needs to be installed.

Processors in embedded systems

Embedded processors can be broken into two broad categories. Ordinary microprocessors (μP) use separate integrated circuits for memory and peripherals. Microcontrollers (μC) have many more peripherals on chip, reducing power consumption, size and cost. In contrast to the personal computer market, many different basic CPU architectures are used, since software is custom-developed for an application and is not a commodity product installed by the end user. Both Von Neumann and various degrees of Harvard architectures are used. RISC as well as non-RISC processors are found. Word lengths vary from 4 bits to 64 bits and beyond, although the most typical remain 8/16-bit. Most architectures come in a large number of different variants and shapes, many of which are also manufactured by several different companies.

Numerous microcontrollers have been developed for embedded systems use. General-purpose microprocessors are also used in embedded systems, but generally require more support circuitry than microcontrollers.

Ready-made computer boards

PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for small, low-volume embedded and ruggedized systems, mostly x86-based. These are often physically small compared to a standard PC, although still quite large compared to most simple (8/16-bit) embedded systems. They often use MS-DOS, Linux, NetBSD, or an embedded real-time operating system such as MicroC/OS-II, QNX or VxWorks. Sometimes these boards use non-x86 processors.

In certain applications, where small size or power efficiency is not a primary concern, the components used may be compatible with those used in general-purpose x86 personal computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller, or having other attributes that make them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are ATMs and arcade machines, which contain code specific to the application.
However, most ready-made embedded systems boards are not PC-centered and do not use the ISA or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having a standardized bus connecting discrete components, and the environment for both hardware and software tools may be very different.
One common design style uses a small system module, perhaps the size of a business card, holding high density BGA chips such as an ARM-based System-on-a-chip processor and peripherals, external flash memory for storage, and DRAM for runtime memory. The module vendor will usually provide boot software and make sure there is a selection of operating systems, usually including Linux and some real time choices. These modules can be manufactured in high volume, by organizations familiar with their specialized testing issues, and combined with much lower volume custom mainboards with application-specific external peripherals.

ASIC and FPGA solutions

A common configuration for very-high-volume embedded systems is the system on a chip (SoC), which contains a complete system consisting of multiple processors, multipliers, caches and interfaces on a single chip. SoCs can be implemented as an application-specific integrated circuit (ASIC) or using a field-programmable gate array (FPGA).


Embedded systems talk with the outside world via peripherals, such as:

  • serial communication interfaces (e.g. RS-232, RS-422/485)
  • synchronous serial interfaces (e.g. I²C, SPI)
  • Universal Serial Bus (USB)
  • networks and fieldbuses (e.g. Ethernet, CAN)
  • analog-to-digital and digital-to-analog converters
  • timers and discrete (general-purpose) I/O: GPIO
  • debugging ports (e.g. JTAG)


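Most such peripherals are exposed to software as memory-mapped registers. A minimal sketch in C of how a GPIO block is typically driven; the register layout and field names here are hypothetical (real layouts are defined by each MCU's reference manual):

```c
#include <stdint.h>

/* Hypothetical GPIO register block; on real hardware this struct
 * would be overlaid on a fixed peripheral address, e.g.
 *   #define GPIOA ((gpio_regs_t *)0x40020000u)   -- made-up address */
typedef struct {
    volatile uint32_t dir;   /* direction: 1 = output, 0 = input, one bit per pin */
    volatile uint32_t out;   /* output latch */
    volatile uint32_t in;    /* sampled input levels */
} gpio_regs_t;

/* Configure a pin as an output. */
void gpio_set_output(gpio_regs_t *gpio, unsigned pin)
{
    gpio->dir |= 1u << pin;
}

/* Drive a pin high or low via the output latch. */
void gpio_write(gpio_regs_t *gpio, unsigned pin, int level)
{
    if (level)
        gpio->out |= 1u << pin;
    else
        gpio->out &= ~(1u << pin);
}
```

The `volatile` qualifier is essential here: it tells the compiler that every register access has a side effect and must not be reordered or optimized away.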
As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use some more specific tools:

  • In circuit debuggers or emulators (see next section).
  • Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid.
  • For systems using digital signal processing, developers may use a math workbench such as Scilab/Scicos, MATLAB/Simulink, EICASLAB, MathCad, Mathematica, or FlowStone DSP to simulate the mathematics. They might also use libraries for both the host and target which eliminate developing DSP routines by hand, as done in DSPnano RTOS and Unison Operating System.
  • A model-based development tool like VisSim lets you create and simulate graphical data-flow and UML state-chart diagrams of components like digital filters, motor controllers, communication protocol decoding and multi-rate tasks. Interrupt handlers can also be created graphically. After simulation, you can automatically generate C code for the VisSim RTOS, which handles the main control task and preemption of background tasks, as well as automatic setup and programming of on-chip peripherals.
  • Custom compilers and linkers may be used to optimize specialized hardware.
  • An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
  • Another alternative is to add a real-time operating system or embedded operating system, which may have DSP capabilities like DSPnano RTOS.
  • Modeling and code-generating tools, often based on state machines
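The checksum/CRC idea above can be sketched in C. CRC-32 with the standard reflected polynomial 0xEDB88320 is a common choice; the bitwise (table-free) variant shown here keeps the ROM footprint small. The function names and image layout are illustrative, not from any particular toolchain:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (reflected polynomial 0xEDB88320), computed over
 * `len` bytes of `data`. Pass 0 as the initial `crc` for a fresh run. */
uint32_t crc32_update(uint32_t crc, const uint8_t *data, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *data++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return ~crc;
}

/* At boot, the loader recomputes the CRC of the application image and
 * compares it with the value appended to the image at build time. */
int image_is_valid(const uint8_t *image, size_t len, uint32_t stored_crc)
{
    return crc32_update(0, image, len) == stored_crc;
}
```

If the check fails, typical responses are to stay in the bootloader and wait for a re-flash, or to fall back to a known-good recovery image.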
Software tools can come from several sources:
  • Software companies that specialize in the embedded market
  • Ported from the GNU software development tools
  • Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative to a common PC processor
As the complexity of embedded systems grows, higher-level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.


Embedded debugging may be performed at different levels, depending on the facilities available. From simplest to most sophisticated they can be roughly grouped into the following areas:

  • Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic)
  • External debugging using logging or serial port output to trace operation using either a monitor in flash or using a debug server like the Remedy Debugger which even works for heterogeneous multicore systems.
  • An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or Nexus interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
  • An in-circuit emulator (ICE) replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
  • A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC. The downsides are expense and slow operation, in some cases up to 100X slower than the final system.
  • For SoC designs, the typical approach is to verify and debug the design on an FPGA prototype board. Tools such as Certus are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs, with capabilities similar to a logic analyzer.
Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as HLL source code, assembly code, or a mixture of both.
Because an embedded system is often composed of a wide variety of elements, the debugging strategy may vary. For instance, debugging a software- (and microprocessor-) centric embedded system is different from debugging an embedded system where most of the processing is performed by peripherals (DSP, FPGA, co-processor). An increasing number of embedded systems today use more than one single processor core. A common problem with multi-core development is the proper synchronization of software execution. In such a case, the embedded system design may wish to check the data traffic on the busses between the processor cores, which requires very low-level debugging, at signal/bus level, with a logic analyzer, for instance.


Real-time operating systems (RTOS) often support tracing of operating system events. A graphical view is presented by a host PC tool, based on a recording of the system behavior. The trace recording can be performed in software, by the RTOS, or by special tracing hardware. RTOS tracing allows developers to understand timing and performance issues of the software system and gives a good understanding of the high-level system behavior. Commercial tools like RTXC Quadros or IAR Systems exist.


Embedded systems often reside in machines that are expected to run continuously for years without errors, and in some cases recover by themselves if an error occurs. Therefore the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.

Specific reliability issues may include:
  • The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
  • The system must be kept running for safety reasons. "Limp modes" are less tolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals.
  • The system will lose large amounts of money when shut down: Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.
A variety of techniques are used, sometimes in combination, to recover from errors—both software bugs such as memory leaks, and also soft errors in the hardware:
  • watchdog timer that resets the computer unless the software periodically notifies the watchdog
  • subsystems with redundant spares that can be switched over to
  • software "limp modes" that provide partial function
  • Designing with a Trusted Computing Base (TCB) architecture ensures a highly secure & reliable system environment
  • A hypervisor designed for embedded systems is able to provide secure encapsulation for any subsystem component, so that a compromised software component cannot interfere with other subsystems, or privileged-level system software. This encapsulation keeps faults from propagating from one subsystem to another, improving reliability. This may also allow a subsystem to be automatically shut down and restarted on fault detection.
  • Immunity Aware Programming
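The watchdog pattern above can be modelled in a few lines of C. On real hardware the timer lives in device registers and firing forces a chip reset; the names and the software model here are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative software model of a watchdog timer. The counter
 * advances on every tick and the watchdog "fires" unless the
 * application kicks it before the timeout elapses. */
static uint32_t wdt_counter;
static uint32_t wdt_timeout;
static bool     wdt_fired;

void wdt_init(uint32_t timeout_ticks)
{
    wdt_counter = 0;
    wdt_timeout = timeout_ticks;
    wdt_fired   = false;
}

/* Called from the healthy main loop: proves the software is alive. */
void wdt_kick(void)
{
    wdt_counter = 0;
}

/* Called from a periodic timer interrupt. */
void wdt_tick(void)
{
    if (++wdt_counter >= wdt_timeout)
        wdt_fired = true;   /* real hardware would reset the chip here */
}

bool wdt_has_fired(void)
{
    return wdt_fired;
}
```

The key design point is that the kick must come from the main control loop itself, not from an interrupt handler, so that a hung main loop cannot keep the watchdog satisfied.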

High vs low volume

For high volume systems such as portable music players or mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just “good enough” to implement the necessary functions.

For low-volume or prototype embedded systems, general purpose computers may be adapted by limiting the programs or by replacing the operating system with a real-time operating system.