Sunday, 1 December 2013

Important meanings

enumerate: mention (a number of things) one by one.

How to instantiate I2C devices

Unlike PCI or USB devices, I2C devices are not enumerated at the hardware
level. Instead, the software must know which devices are connected on each
I2C bus segment, and what address these devices are using. For this
reason, the kernel code must instantiate I2C devices explicitly. There are
several ways to achieve this, depending on the context and requirements.

Method 1: Declare the I2C devices by bus number

This method is appropriate when the I2C bus is a system bus as is the case
for many embedded systems. On such systems, each I2C bus has a number
which is known in advance. It is thus possible to pre-declare the I2C
devices which live on this bus. This is done with an array of struct
i2c_board_info which is registered by calling i2c_register_board_info().

Example (from omap2 h4):

static struct i2c_board_info h4_i2c_board_info[] __initdata = {
    {
        I2C_BOARD_INFO("isp1301_omap", 0x2d),
        .irq            = OMAP_GPIO_IRQ(125),
    },
    {   /* EEPROM on mainboard */
        I2C_BOARD_INFO("24c01", 0x52),
        .platform_data  = &m24c01,
    },
    {   /* EEPROM on cpu card */
        I2C_BOARD_INFO("24c01", 0x57),
        .platform_data  = &m24c01,
    },
};

static void __init omap_h4_init(void)
{
    (...)
    i2c_register_board_info(1, h4_i2c_board_info,
            ARRAY_SIZE(h4_i2c_board_info));
    (...)
}

The above code declares 3 devices on I2C bus 1, including their respective
addresses and custom data needed by their drivers. When the I2C bus in
question is registered, the I2C devices will be instantiated automatically
by i2c-core.

The devices will be automatically unbound and destroyed when the I2C bus
they sit on goes away (if ever.)

Method 2: Instantiate the devices explicitly

This method is appropriate when a larger device uses an I2C bus for
internal communication. A typical case is TV adapters. These can have a
tuner, a video decoder, an audio decoder, etc. usually connected to the
main chip by means of an I2C bus. You won't know the number of the I2C
bus in advance, so method 1 described above can't be used. Instead,
you can instantiate your I2C devices explicitly. This is done by filling
a struct i2c_board_info and calling i2c_new_device().

Example (from the sfe4001 network driver):

static struct i2c_board_info sfe4001_hwmon_info = {
    I2C_BOARD_INFO("max6647", 0x4e),
};

int sfe4001_init(struct efx_nic *efx)
{
    (...)
    efx->board_info.hwmon_client =
        i2c_new_device(&efx->i2c_adap, &sfe4001_hwmon_info);
    (...)
}


The above code instantiates 1 I2C device on the I2C bus which is on the
network adapter in question.

A variant of this is when you don't know for sure if an I2C device is
present or not (for example for an optional feature which is not present
on cheap variants of a board but you have no way to tell them apart), or
it may have different addresses from one board to the next (manufacturer
changing its design without notice). In this case, you can call
i2c_new_probed_device() instead of i2c_new_device().

Example (from the nxp OHCI driver):

static const unsigned short normal_i2c[] = { 0x2c, 0x2d, I2C_CLIENT_END };

static int usb_hcd_nxp_probe(struct platform_device *pdev)
{
    (...)
    struct i2c_adapter *i2c_adap;
    struct i2c_board_info i2c_info;

    (...)
    i2c_adap = i2c_get_adapter(2);
    memset(&i2c_info, 0, sizeof(struct i2c_board_info));
    strlcpy(i2c_info.type, "isp1301_nxp", I2C_NAME_SIZE);
    isp1301_i2c_client = i2c_new_probed_device(i2c_adap, &i2c_info,
                                               normal_i2c, NULL);
    i2c_put_adapter(i2c_adap);
    (...)
}

The above code instantiates up to 1 I2C device on the I2C bus which is on
the OHCI adapter in question. It first tries at address 0x2c, if nothing
is found there it tries address 0x2d, and if still nothing is found, it
simply gives up.

The driver which instantiated the I2C device is responsible for destroying
it on cleanup. This is done by calling i2c_unregister_device() on the
pointer that was earlier returned by i2c_new_device() or
i2c_new_probed_device().

Method 3: Probe an I2C bus for certain devices

Sometimes you do not have enough information about an I2C device, not even
to call i2c_new_probed_device(). The typical case is hardware monitoring
chips on PC mainboards. There are several dozen models, which can live
at 25 different addresses. Given the huge number of mainboards out there,
it is next to impossible to build an exhaustive list of the hardware
monitoring chips being used. Fortunately, most of these chips have
manufacturer and device ID registers, so they can be identified by
software.

In that case, I2C devices are neither declared nor instantiated
explicitly. Instead, i2c-core will probe for such devices as soon as their
drivers are loaded, and if any is found, an I2C device will be
instantiated automatically. In order to prevent any misbehavior of this
mechanism, the following restrictions apply:
* The I2C device driver must implement the detect() method, which
  identifies a supported device by reading from arbitrary registers.
* Only buses which are likely to have a supported device and agree to be
  probed, will be probed. For example this avoids probing for hardware
  monitoring chips on a TV adapter.

See lm90_driver and lm90_detect() in drivers/hwmon/lm90.c for an example.

I2C devices instantiated as a result of such a successful probe will be
destroyed automatically when the driver which detected them is removed,
or when the underlying I2C bus is itself destroyed, whichever happens
first.

Those of you familiar with the i2c subsystem of 2.4 kernels and early 2.6
kernels will find out that this method 3 is essentially similar to what
was done there. Two significant differences are:
* Probing is only one way to instantiate I2C devices now, while it was the
  only way back then. Where possible, methods 1 and 2 should be preferred.
  Method 3 should only be used when there is no other way, as it can have
  undesirable side effects.
* I2C buses must now explicitly say which I2C driver classes can probe
  them (by the means of the class bitfield), while all I2C buses were
  probed by default back then. The default is an empty class which means
  that no probing happens. The purpose of the class bitfield is to limit
  the aforementioned undesirable side effects.

Once again, method 3 should be avoided wherever possible. Explicit device
instantiation (methods 1 and 2) is much preferred for it is safer and
faster.

Method 4: Instantiate from user-space

In general, the kernel should know which I2C devices are connected and
what addresses they live at. However, in certain cases, it does not, so a
sysfs interface was added to let the user provide the information. This
interface is made of 2 attribute files which are created in every I2C bus
directory: new_device and delete_device. Both files are write-only and you
must write the right parameters to them in order to properly instantiate
or delete an I2C device.

File new_device takes 2 parameters: the name of the I2C device (a string)
and the address of the I2C device (a number, typically expressed in
hexadecimal starting with 0x, but can also be expressed in decimal.)

File delete_device takes a single parameter: the address of the I2C
device. As no two devices can live at the same address on a given I2C
segment, the address is sufficient to uniquely identify the device to be
deleted.

Example (instantiate and delete an eeprom device on bus 3):

# echo eeprom 0x50 > /sys/bus/i2c/devices/i2c-3/new_device
# echo 0x50 > /sys/bus/i2c/devices/i2c-3/delete_device

While this interface should only be used when in-kernel device declaration
can't be done, there is a variety of cases where it can be helpful:
* The I2C driver usually detects devices (method 3 above) but the bus
  segment your device lives on doesn't have the proper class bit set and
  thus detection doesn't trigger.
* The I2C driver usually detects devices, but your device lives at an
  unexpected address.
* The I2C driver usually detects devices, but your device is not detected,
  either because the detection routine is too strict, or because your
  device is not officially supported yet but you know it is compatible.
* You are developing a driver on a test board, where you soldered the I2C
  device yourself.

This interface is a replacement for the force_* module parameters some I2C
drivers implement. Being implemented in i2c-core rather than in each
device driver individually, it is much more efficient, and also has the
advantage that you do not have to reload the driver to change a setting.
You can also instantiate the device before the driver is loaded or even
available, and you don't need to know what driver the device needs.

Internal input event handling in the Linux kernel and the Android userspace

I found this post on the Internet.
While figuring out hardware buttons for my NITDroid project, I had the opportunity of exploring the way Linux and Android handle input events internally before passing them through to the user application. This post traces the propagation of an input event from the Linux kernel through the Android user space as far as I understand it. Although the principles are likely the same for essentially any input device, I will be drawing on my investigations of the drivers for the LM8323 hardware keyboard (drivers/input/keyboard/lm8323.c) and the TSC2005 touchscreen (drivers/input/touchscreen/tsc2005.c) which are both found inside the Nokia N810.
I. Inside the Linux kernel
Firstly, Linux exposes externally a uniform input event interface for each device as /dev/input/eventX where X is an integer. This means these "devices" can be polled in the same way and the events they produce are in the same uniform format. To accomplish this, Linux has a standard set of routines that every device driver uses to register / unregister the hardware it manages and publish input events it receives.
When the driver module of an input device is first loaded into the kernel, its initialization routine usually sets up some sort of probing to detect the presence of the types of hardware it is supposed to manage. This probing is of course device-specific; however, if it is successful, the module will eventually invoke the function input_register_device(…) in include/linux/input.h which sets up a file representing the physical device as /dev/input/eventX where X is some integer. The module will also register a function to handle IRQs originating from the hardware it manages via request_irq(…) (include/linux/interrupt.h) so that the module will be notified when the user interacts with the physical device it manages.
When the user physically interacts with the hardware (for instance by pressing or releasing a key, or exerting or lifting pressure on the touchscreen), an IRQ is fired and Linux invokes the IRQ handler registered by the corresponding device driver. However, IRQ handlers by convention must return quickly, as they essentially block the entire system while executing and thus cannot perform any lengthy processing; typically, therefore, an IRQ handler will merely 1) save the data carried by the IRQ, 2) ask the kernel to schedule a method that will process the event later on, after we have exited IRQ mode, and 3) tell the kernel we have handled the IRQ and exit. This can be very straightforward, as in the IRQ handler in the driver for the LM8323 keyboard inside the N810:
/*
 * We cannot use I2C in interrupt context, so we just schedule work.
 */
static irqreturn_t lm8323_irq(int irq, void *data)
{
        struct lm8323_chip *lm = data;

        schedule_work(&lm->work);

        return IRQ_HANDLED;
}
It could also be more complex, like the one in the driver of the TSC2005 touchscreen controller (tsc2005_ts_irq_handler(…)), as it integrates into the SPI framework (which I have never looked into…).
Some time later, the kernel executes the scheduled method to process the recently saved event. Invariably, this method would report the event in a standard format by calling one or more of the input_* functions in include/linux/input.h; these include input_event(…) (general purpose), input_report_key(…) (for key down and key up events), input_report_abs(…) (for position events e.g. from a touchscreen) among others. Note that the input_report_*(…) functions are really just convenience functions that call input_event(…) internally, as defined in include/linux/input.h. It is likely that a lot of processing happens before the event is published via these methods; the LM8323 driver for instance does an internal key code mapping step and the TSC2005 driver goes through this crazy arithmetic involving Ohms (to calculate a pressure index from resistance data?). Furthermore, one physical IRQ could correspond to multiple published input events, and vice versa. Finally, when all event publishing is finished, the event processing method calls input_sync(…) to flush the event out. The event is now ready to be accessed by the userspace at /dev/input/eventX.
II. Inside the Android userspace
When the Android GUI starts up, an instance of the class WindowManagerService (frameworks/base/services/java/com/android/server/) is created. This class, when constructed, initializes the member field
final KeyQ mQueue;
where KeyQ, defined as a private class inside the same file, extends Android’s basic input handling class, the abstract class KeyInputQueue (frameworks/base/services/java/com/android/server/ and frameworks/base/services/jni/com_android_server_KeyInputQueue.cpp). As mQueue is instantiated, it of course calls the constructor of KeyInputQueue; the latter, inconspicuously, starts an anonymous thread it owns that is at the heart of the event handling system in Android:
Thread mThread = new Thread("InputDeviceReader") {
    public void run() {
        RawInputEvent ev = new RawInputEvent();
        while (true) {
            try {
                readEvent(ev);  // block, doesn't release the monitor

                boolean send = false;
                if (ev.type == RawInputEvent.EV_DEVICE_ADDED) {
                    // ...
                } else if (ev.type == RawInputEvent.EV_DEVICE_REMOVED) {
                    // ...
                } else {
                    di = getInputDevice(ev.deviceId);
                    // first crack at it
                    send = preprocessEvent(di, ev);
                }
                if (!send) {
                    continue;
                }
                synchronized (mFirst) {
                    // Is it a key event?
                    if (type == RawInputEvent.EV_KEY &&
                            (classes&RawInputEvent.CLASS_KEYBOARD) != 0 &&
                            (scancode < RawInputEvent.BTN_FIRST ||
                                    scancode > RawInputEvent.BTN_LAST)) {
                        boolean down;
                        if (ev.value != 0) {
                            down = true;
                            di.mKeyDownTime = curTime;
                        } else {
                            down = false;
                        }
                        int keycode = rotateKeyCodeLocked(ev.keycode);
                        addLocked(di, curTimeNano, ev.flags,
                                newKeyEvent(di, di.mKeyDownTime, curTime, down,
                                        keycode, 0, scancode,
                                        ((ev.flags & WindowManagerPolicy.FLAG_WOKE_HERE) != 0)
                                         ? KeyEvent.FLAG_WOKE_HERE : 0));
                    } else if (ev.type == RawInputEvent.EV_KEY) {
                        // ...
                    } else if (ev.type == RawInputEvent.EV_ABS &&
                            (classes&RawInputEvent.CLASS_TOUCHSCREEN_MT) != 0) {
                        // Process position events from multitouch protocol.
                    } else if (ev.type == RawInputEvent.EV_ABS &&
                            (classes&RawInputEvent.CLASS_TOUCHSCREEN) != 0) {
                        // Process position events from single touch protocol.
                    } else if (ev.type == RawInputEvent.EV_REL &&
                            (classes&RawInputEvent.CLASS_TRACKBALL) != 0) {
                        // Process movement events from trackball (mouse) protocol.
                    }
                }
            } catch (RuntimeException exc) {
                Slog.e(TAG, "InputReaderThread uncaught exception", exc);
            }
        }
    }
};
I have removed most of this ~350-line function as irrelevant to our discussion and reformatted the code for easier reading. The key idea is that this independent thread will
  1. Read an event
  2. Call the preprocessEvent(…) method of its derived class, offering the latter a chance to prevent the event from being propagated further
  3. Add it to the event queue owned by the class
This InputDeviceReader thread started by WindowManagerService (indirectly via KeyInputQueue’s constructor) is thus THE event loop of the Android UI.
But we are still missing the link from the kernel to this InputDeviceReader. What exactly is this magical readEvent(…)? It turns out that this is actually a native method implemented in the C++ half of KeyInputQueue:
static Mutex gLock;
static sp<EventHub> gHub;

static jboolean
android_server_KeyInputQueue_readEvent(JNIEnv* env, jobject clazz,
                                          jobject event)
{
    // ... (gHub is created lazily, under gLock, on the first call)
    sp<EventHub> hub = gHub;
    if (hub == NULL) {
        hub = new EventHub;
        gHub = hub;
    }
    // ...
    bool res = hub->getEvent(&deviceId, &type, &scancode, &keycode,
            &flags, &value, &when);
    // ...
    return res;
}
Ah, so readEvent is really just a proxy for EventHub::getEvent(…). If we proceed to look up EventHub in frameworks/base/libs/ui/EventHub.cpp, we find
int EventHub::scan_dir(const char *dirname)
{
    // ...
    dir = opendir(dirname);
    while((de = readdir(dir))) {
        // ... open each device node found ...
    }
    return 0;
}

static const char *device_path = "/dev/input";

bool EventHub::openPlatformInput(void)
{
    // ...
    res = scan_dir(device_path);
    // ...
    return true;
}

bool EventHub::getEvent(int32_t* outDeviceId, int32_t* outType,
        int32_t* outScancode, int32_t* outKeycode, uint32_t *outFlags,
        int32_t* outValue, nsecs_t* outWhen)
{
    // ...
    if (!mOpened) {
        mError = openPlatformInput() ? NO_ERROR : UNKNOWN_ERROR;
        mOpened = true;
    }

    while(1) {
        // First, report any devices that had last been added/removed.
        if (mClosingDevices != NULL) {
            // ...
            *outType = DEVICE_REMOVED;
            delete device;
            return true;
        }
        if (mOpeningDevices != NULL) {
            // ...
            *outType = DEVICE_ADDED;
            return true;
        }
        // ...
        pollres = poll(mFDs, mFDCount, -1);
        // ...
        // mFDs[0] is used for inotify, so process regular events starting at mFDs[1]
        for(i = 1; i < mFDCount; i++) {
            if(mFDs[i].revents) {
                if(mFDs[i].revents & POLLIN) {
                    res = read(mFDs[i].fd, &iev, sizeof(iev));
                    if (res == sizeof(iev)) {
                        // ...
                        *outType = iev.type;
                        *outScancode = iev.code;
                        if (iev.type == EV_KEY) {
                            err = mDevices[i]->layoutMap->map(iev.code, outKeycode, outFlags);
                            // ...
                        } else {
                            *outKeycode = iev.code;
                        }
                        // ...
                        return true;
                    } else {
                        // ... error handling ...
                    }
                }
            }
        }
    }
}
Again, most of the details have been stripped out from the above code, but we now see how readEvent() in KeyInputQueue is getting these events from Linux: on first call, EventHub::getEvent scans the directory /dev/input for input devices, opens them and saves their file descriptors in an array called mFDs. Then whenever it is called again, it tries to read from each of these input devices by simply calling the read(2) Linux system call.
OK, now we know how an event propagates through EventHub::getEvent(…) to KeyInputQueue::readEvent(…), where it could get queued inside WindowManagerService.mQueue (which, as a reminder, extends the otherwise abstract KeyInputQueue). But what happens then? How does that event get to the client application?
Well, it turns out that WindowManagerService has yet another private member class that handles just that:
private final class InputDispatcherThread extends Thread {
    @Override public void run() {
        while (true) {
            try {
                process();
            } catch (Exception e) {
                Slog.e(TAG, "Exception in input dispatcher", e);
            }
        }
    }

    private void process() {
        while (true) {
            // Retrieve next event, waiting only as long as the next
            // repeat timeout.  If the configuration has changed, then
            // don't wait at all -- we'll report the change as soon as
            // we have processed all events.
            QueuedEvent ev = mQueue.getEvent(
                (int)((!configChanged && curTime < nextKeyTime)
                        ? (nextKeyTime-curTime) : 0));
            try {
                if (ev != null) {
                    curTime = SystemClock.uptimeMillis();
                    int eventType;
                    if (ev.classType == RawInputEvent.CLASS_TOUCHSCREEN) {
                        eventType = eventType((MotionEvent)ev.event);
                    } else if (ev.classType == RawInputEvent.CLASS_KEYBOARD ||
                                ev.classType == RawInputEvent.CLASS_TRACKBALL) {
                        eventType = LocalPowerManager.BUTTON_EVENT;
                    } else {
                        eventType = LocalPowerManager.OTHER_EVENT;
                    }
                    // ...
                    switch (ev.classType) {
                        case RawInputEvent.CLASS_KEYBOARD:
                            KeyEvent ke = (KeyEvent)ev.event;
                            if (ke.isDown()) {
                                lastKey = ke;
                                downTime = curTime;
                                keyRepeatCount = 0;
                                lastKeyTime = curTime;
                                nextKeyTime = lastKeyTime
                                        + ViewConfiguration.getLongPressTimeout();
                            } else {
                                lastKey = null;
                                downTime = 0;
                                // Arbitrary long timeout.
                                lastKeyTime = curTime;
                                nextKeyTime = curTime + LONG_WAIT;
                            }
                            dispatchKey((KeyEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_TOUCHSCREEN:
                            dispatchPointer(ev, (MotionEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_TRACKBALL:
                            dispatchTrackball(ev, (MotionEvent)ev.event, 0, 0);
                            break;
                        case RawInputEvent.CLASS_CONFIGURATION_CHANGED:
                            configChanged = true;
                            break;
                    }
                } else if (configChanged) {
                    // ...
                } else if (lastKey != null) {
                    // ...
                } else {
                    // ...
                }
            } catch (Exception e) {
                Slog.e(TAG,
                        "Input thread received uncaught exception: " + e, e);
            }
        }
    }
}
As we can see, this thread started by WindowManagerService is very simple; all it does is
  1. Grab events queued into WindowManagerService.mQueue
  2. Call WindowManagerService.dispatchKey(…) (or its pointer/trackball counterparts) when appropriate.
If we next inspect WindowManagerService.dispatchKey(…), we would see that it checks the currently focused window, and calls android.view.IWindow.dispatchKey(…) on that window. The event is now in the user space.