Monday, 28 October 2013

Interactive Devices: Lab 2 Group Post - Final Idea

Following our discussion during the last workshop, Amir went hunting for an idea for our interactive device. After much research, and drawing on his passion for solving puzzles, he came up with the idea of creating an interactive Rubik’s cube.
The idea is to be able to control your computer using a Rubik’s cube. This includes simple tasks such as zooming in or scrolling down when reading a paper, unlocking your laptop using certain combinations, controlling a 3D simulation, etc.
Amir purchased a couple of Rubik’s cubes and held a meeting with the rest of us the next day. Everyone was on board and very excited about the idea. During the meeting he disassembled one of the cubes to explain to the group how we might go about designing ours.
We then went to the HCI lab, pitched our idea to some of the research assistants, and received positive feedback along with suggestions for techniques we could use to refine our device.


The uses of this device range from controlling a media player to scrolling pages in a web browser or even manipulating a 3D simulation. It differs from a keyboard or mouse in three ways: it’s wireless, it can be personalised to suit users’ needs, and it’s simply a fun way of interacting with your computer.
The device moves us away from WIMP interfaces and gives users the opportunity to personalise the way they interact with their personal computers. The idea can be extended further so that the cube becomes more than just a controller: if each of the 27 smaller cubes were a screen, the cube itself could become a personal device, used for things like answering your calls, acting as an alarm clock or even checking your emails.
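To illustrate the personalisation aspect, here is a minimal sketch of how cube face rotations (detected, say, by encoders inside the cube) could be bound to computer actions. The rotation codes and the actions they map to are purely illustrative, not our final design:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical bindings from face rotations to desktop actions.
// Users could swap these out to personalise the cube.
std::string actionFor(const std::string& rotation) {
    static const std::map<std::string, std::string> bindings = {
        {"R",  "scroll_down"},  // right face clockwise
        {"R'", "scroll_up"},    // right face anticlockwise
        {"U",  "zoom_in"},      // top face clockwise
        {"U'", "zoom_out"},     // top face anticlockwise
    };
    auto it = bindings.find(rotation);
    return it != bindings.end() ? it->second : "ignored";
}
```

Unrecognised rotations are simply ignored, which also leaves room for multi-rotation combinations (e.g. an unlock sequence) on top of the single-move bindings.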

This is the list of materials we are going to need for the completion of the project:

  1. Encoders from Trackball Mice
  2. Transparent Acrylics
  3. RGB LEDs
  4. Arduino Board
  5. Buttons

Tuesday, 22 October 2013

Interactive Devices: Lab 2 Individual Post - Introduction to electrical tools & Arduino

In this lab we were introduced to several tools we might use to develop or test our projects. Our group was supplied with a new laptop, an Arduino starter kit and some hardware testing equipment.
We started by setting up the Arduino board and installing its SDK on the notebook. We then implemented some example Arduino projects in order to understand how to program the board and connect the wires and various components together.




Our first example program showed us how to build a circuit on the supplied breadboard in order to light 3 LEDs. The program we implemented lit each LED once in order and then started again, indefinitely.
By adding a push-down switch, the control flow could also be modified: while the switch was pressed the lights stopped, and when it was released the cycle resumed as usual.
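The control flow above can be modelled in plain C++ so the logic is easy to check off-board. On the Arduino itself this would live in loop() and use digitalWrite()/digitalRead(); this is just a sketch of the behaviour, not our exact code:

```cpp
#include <cassert>

const int NUM_LEDS = 3;

// Advance to the next LED unless the switch is held down, in which
// case the currently lit LED stays on. Returns the index of the LED
// that is lit after this step.
int step(int currentLed, bool switchPressed) {
    if (switchPressed) {
        return currentLed;               // lights stop while pressed
    }
    return (currentLed + 1) % NUM_LEDS;  // cycle 0 -> 1 -> 2 -> 0 ...
}
```

Calling step() repeatedly with the switch released walks through the LEDs in order forever; holding the switch freezes the sequence exactly as described above.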


In the second example the manual introduced us to a temperature sensor; by reading its output voltage we could track the temperature of the circuit’s surroundings as it changed.
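Assuming the sensor is the TMP36 commonly found in Arduino starter kits, its output voltage rises linearly with temperature: 0.5 V at 0 °C plus 10 mV per degree. Converting a 10-bit analog reading (0–1023 over a 0–5 V range) then looks like this:

```cpp
#include <cassert>

// Convert a raw 10-bit ADC reading into degrees Celsius,
// assuming a TMP36 sensor on a 5 V Arduino.
double readingToCelsius(int analogReading) {
    double voltage = analogReading * 5.0 / 1024.0;  // ADC counts -> volts
    return (voltage - 0.5) * 100.0;                 // volts -> degrees C
}
```

For example, a reading of 155 corresponds to roughly 0.76 V, i.e. a little under 26 °C.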

During this lab I also took some time to familiarise myself with hardware such as the Leap Motion and the Oculus Rift so that I could understand better their behaviour and see if we could incorporate their functionality in our project.

Interactive Devices: Lab 1 Group Post - Presented Ideas

As a group we crystallised our ideas into two main ones.

The first idea is inspired by the movie The Dark Knight (Batman), in which the main character uses a cluster of mobile phones as an alternative to sonar devices to map and visualise an enclosed environment.

Since the mobile devices on the market today are not sensitive enough to reliably detect the sound signals, our approach is going to be to use sonar devices instead as a proof of concept.
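The core calculation behind any sonar mapper is a time-of-flight estimate: the pulse travels to the obstacle and back, so the one-way distance is half the echo delay times the speed of sound (about 343 m/s in air at room temperature). A minimal sketch of that idea, not our final design:

```cpp
#include <cassert>
#include <cmath>

// Estimate the distance to an obstacle from the round-trip echo time.
// speedOfSound defaults to ~343 m/s (dry air at ~20 degrees C).
double distanceMetres(double echoSeconds, double speedOfSound = 343.0) {
    return speedOfSound * echoSeconds / 2.0;  // halve: out and back
}
```

An echo delay of 20 ms therefore puts the obstacle about 3.4 m away.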

This technology would be beneficial in environments where vision is limited or non-existent, such as:
  • Looking around a corner or behind a wall.
  • Fire rescue and combat, where vision is quite limited (smoke).
  • Helping the visually impaired.


The second idea is a subliminal controller.
The goal is to provide subliminal feedback to the user and input to the machine. The main output is an “RGB scent printer”, able to create a range of smells based on the user’s experience within the application, together with an IllumiRoom-style visual output synced with the scents. The subliminal input is a bracelet that detects the user’s pulse and changes the in-game experience based on it.


This technology could be used in:

  • Gaming
  • Cinemas
  • Combat Simulations

Tuesday, 15 October 2013

Interactive Devices: Lab 1 Individual Post - Project idea brainstorming

The purpose of the first lab was to gather in our groups and start brainstorming project ideas. The lab was divided into ten-minute phases.

In the first phase we had to come up with up to ten ideas each and write them on post-it notes. Then we had to stick everyone's ideas on a board to start looking at them.



In the second phase we looked as a group at all the ideas and tried to categorise them in order to get a clearer picture. We ended up dividing the ideas by platform: Arduino, Oculus Rift and Tablet.



In the third phase we needed to decide which ten ideas to keep and scrap the rest. To do this, each person quickly presented their idea and explained why it was good. Much of the time we managed to merge similar ideas together, which helped a lot.

The remaining ideas were:
  1. Sonar mapping of buildings through mobile phones
  2. Surface made of adjustable rods with touch screens on them to show a map's terrain in 3D
  3. Oculus Rift game with Kinect - using reality to build the game environment and sending it back to the Oculus Rift
  4. Wristband to monitor heartbeat, temperature and pressure and change video game dynamics
  5. Touch-screen jacket
  6. Modifying terrain through vibration/electronic impulses
  7. Tablet and TV interaction with Kinect - using eye tracking to scroll webpages on TV and gestures to move between pages for books
  8. Sending electrical impulses to brain
  9. Water clone/water pressure jetpack
  10. Hologram which could be moved in the space and resized
In the fourth phase we had to narrow the ideas down to two. This was not easy, as some members felt strongly about their ideas, but criticism was received pretty well overall.
Eventually we were left with:
  1. Subliminal Controller - Using heartbeat monitor to change game dynamics and outputting smells based on the game environment
  2. Bat Mapper - Sonar mapping of buildings through mobile phones
In the fifth and final phase we prepared a presentation and presented it to the rest of the class. We named our team the Subliminal Bats, a combination of our two ideas. The presentation covered three key points of the two ideas we had come up with:
  1. What is the big idea and why is it interesting?
  2. Diagram of the system - how will it work?
  3. Three use cases

Subliminal Controller
The idea is to provide olfactory feedback to the user. This is interesting because, up until now, electronic devices have only given acoustic, tactile and visual feedback. Furthermore, in games, the players’ heartbeats will also be monitored to dynamically change gameplay: the faster the heartbeat, the harder it is to see things in the game.
Our aim is to provide a new dimension of entertainment by increasing realism.
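One possible way to realise “the faster the heartbeat, the harder to see” is a linear fall-off of visibility between a resting rate and some maximum rate. The thresholds and the 0.2 visibility floor below are illustrative assumptions, not values our group has settled on:

```cpp
#include <cassert>
#include <cmath>

// Map measured heart rate (bpm) to an in-game visibility factor in
// [0.2, 1.0]: full visibility at or below the resting rate, the floor
// at or above the maximum rate, linear in between. All thresholds are
// hypothetical placeholders.
double visibility(double bpm, double restBpm = 60.0, double maxBpm = 180.0) {
    if (bpm <= restBpm) return 1.0;
    if (bpm >= maxBpm) return 0.2;
    double t = (bpm - restBpm) / (maxBpm - restBpm);
    return 1.0 - 0.8 * t;  // interpolate between 1.0 and 0.2
}
```

The game engine could then multiply scene brightness or draw distance by this factor each frame.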

This application has three main use cases:
  1. Video games - mentioned above
  2. Cinemas - adding realism to the film
  3. Combat Training - adds combat field realism

Bat Mapper
The idea is to be able to map a building using sound impulses emitted by smartphones. As the world becomes ever more technologically ubiquitous, it is an interesting research question whether current smartphones are powerful enough to provide such information, so that old, bulky machinery can be replaced.

This application has three main use cases:
  1. Accessibility - blind people could use their phone to "see" objects 
  2. Fire rescue - understanding which parts of the building in a fire are still intact can lead to safety
  3. Architecture - enables architects to check a building for irregularities, or spot anything missing, by examining the outputted model
For anyone who's interested, here is the recording of our presentation:



Thursday, 10 October 2013

Paper Review - "Reality Based Interaction: A Framework for Post-WIMP Interfaces"

From the early days of the command line to the present Windows, Icons, Menus, Pointer (WIMP) paradigm much has changed, and things keep changing quickly. It seems the more technology evolves, the quicker new interfacing systems are invented.
More recently we’ve seen the boom of the touch screen, which has revolutionised the way we interact with our electronic gadgets by letting us perform actions in a much simpler and more intuitive way.

The paper proposes yet another way to interact with our gadgets: Reality Based Interaction (RBI). Based on four key principles, RBI’s goal is to bring reality to our everyday interactions with our gadgets like never before.



RBI’s first principle is Naïve Physics (NP): the common-sense knowledge humans have about the physical world. Physical attributes such as mass and gravity are simulated in interfaces to give them a more realistic feel, e.g. when scrolling through the phonebook on an iPhone or Android device.

The second principle, Body Awareness and Skills (BAS), refers to the familiarity and understanding that people have of their own bodies. Using our limbs to interact with interfaces adds even greater reality, as humans interact with real objects on a daily basis.

Environment Awareness and Skills (EAS) deals with the clues the environment gives us about what surrounds us and how to navigate it.
The last principle, Social Awareness and Skills (SAS), includes humans’ ability to communicate verbally and non-verbally, exchange objects and work with others on a task.


Adding realism to an interface doesn’t always pay off, though: trade-offs need to be made between reality and other desired qualities in order to produce an effective product.
A good point made by the authors is that reality should be given up only when other desired qualities such as expressive power, efficiency, versatility, ergonomics, accessibility and practicality are gained.

In conclusion, the material in the paper is certainly exciting and interesting but no concept or theory of the implementation of such methods is mentioned, which makes it feel "hand-wavy" (pun intended).


Sunday, 6 October 2013

Paper Review - “Can We Beat the Mouse with MAGIC?”

The 2013 CHI paper by Ribel Fares et al. expands on the idea of faster mouse movement on desktops by combining physical movement with eye tracking.
This concept is not new; in fact similar implementations already exist, the most recent of which is Conservative MAGIC.

Conservative MAGIC improves on previous methods by warping the mouse cursor only when the user signals intent with a mouse movement. Although this improves user-friendliness, it is still not ideal, because humans find it harder to track items that warp than to follow continuous movement.

Mouse cursor warping was then replaced by dynamic speed adjustment: the mouse sensitivity is decreased when the distance between the cursor and the gaze centre is small, and increased when it is large. This introduced yet another issue: when the user moved past the target, the speed could quickly increase, making the cursor shoot far away.

The implementation proposed in the paper is called Animated MAGIC. It is based on Conservative MAGIC but improves on both accuracy and speed by not only increasing the cursor speed during selection, but also homing the cursor in towards the gaze centre. This prevents overshooting the target and gives a gradual speed change that is easy and comfortable to follow with your eyes.
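A rough sketch of that homing behaviour: instead of teleporting, the cursor takes a step proportional to its remaining distance from the gaze centre each frame, so it glides smoothly and slows down as it arrives. The 0.3 gain is my own assumption for illustration, not a value from the paper:

```cpp
#include <cassert>
#include <cmath>

struct Point { double x, y; };

// Move the cursor a fixed fraction of the remaining distance towards
// the gaze centre: large steps when far away, gentle ones when close,
// so the cursor never overshoots the gaze point.
Point homeTowardsGaze(Point cursor, Point gaze, double gain = 0.3) {
    cursor.x += gain * (gaze.x - cursor.x);
    cursor.y += gain * (gaze.y - cursor.y);
    return cursor;
}
```

Repeated calls converge on the gaze centre exponentially, which matches the paper’s described goal of a gradual, easy-to-follow speed change.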



Unfortunately eye tracking is not very precise, which can hinder the effectiveness of Animated MAGIC. To improve it, the authors implemented a Dynamic Local Calibration method which uses selections as local calibration points.

The results showed that the MAGIC methods perform much better than standard mouse movement, and that Dynamic Local Calibration improved both Conservative and Animated MAGIC. Furthermore, Animated MAGIC increased performance by 8% compared to standard mouse movement and was found to be more usable than all the other methods.