Friday, 20 December 2013

Interactive Devices - Unit Conclusion

Overall the unit has taught me a lot, especially on the hardware side of things. It was very interesting to see how different pieces of software could connect in order to transmit hardware signals. In our case:
Arduino board -> Arduino program -> OpenFrameworks program -> Windows

This course also helped to reinforce what I already suspected: hardware is not the side of computing that I am interested in. It's too messy and low-level for my liking. What I do like, though, is the idea of creating something from scratch in a team that works as a pipeline, where members who specialise in hardware build an initial device or machine which I can then build on.

I personally found it challenging to write a research paper for a project which could be shown at a conference such as CHI. Finding the appropriate wording was not easy and took a long time. In addition, all the extra parts a Work-in-Progress CHI paper needs in order to be presented added to the time it took to write the report.

Finally, I think the unit would be beneficial to all Computer Science students who have never tasted the adventure that is hardware. The reason is not only to encourage them to pick up products such as the Arduino and make something creative or useful, but also to let them find out whether hardware is something they enjoy and never had the time to try, possibly changing their career path.

Interactive Devices: Final Lab - Rubikon Demo

On the morning of the demo, the group gathered to decide in which order the applications would be shown; this was critical for a good presentation flow.
At my suggestion, we decided to start with authentication, because a computer session usually starts with logging in. After authentication through our custom login screen, we would switch applications to go to the Desktop. Here we would use the folder navigation features to travel through the icons, select Google Chrome and launch it. Once in the browser, features such as switching tabs, scrolling up and down, and going back and forward in a page's history would be highlighted. These actions would all show how Rubikon works as an input device.

Even though what we had was only a prototype, our final product would be a personal device, which means it would need output as well. To highlight this, at the end of the presentation a specific button would be pressed to play a Christmas song and light the cube's LEDs in different colours in time with the song's rhythm.
With the demo order finalised, we concluded the meeting by deciding what to say during the presentation to emphasise the potential of our device.

The feedback from the people who came to see and try out our device during the demo was very positive, and everyone seemed to enjoy using the cube as an alternative means of input. However, because the prototype was at such an early stage it was hard to convey that Rubikon would, ideally, be a personal device. For this reason I brought a mini touch screen to the demo to give an idea of the feasibility of the final product.

Tuesday, 10 December 2013

Interactive Devices: Lab 5 Individual Post - Implementing the software features

From the last lab we had an Arduino program capable of transmitting rotary encoder inputs to our laptop. In order to make use of these inputs, Vlad and I, after much research, decided to use the OpenFrameworks library to implement the features we set out at the beginning of the project, i.e. folder navigation, browser navigation and authentication.

We started by sending the device's rotations from Arduino to OpenFrameworks, and subsequently created a Node.js server listening for commands in order to perform actions in the browser. As we implemented the new system we quickly remembered that most, if not all, of the important computer commands are mapped to keyboard shortcuts; even better, these shortcuts are often consistent across programs. Luckily, Windows provides a C++ function called keybd_event() which simulates key strokes.

Example (this uses the Windows API, so it needs <windows.h>):

keybd_event(VK_TAB, 0, 0, 0);                 // presses the Tab key
::Sleep(1000);                                // waits one second while the key is held
keybd_event(VK_TAB, 0, KEYEVENTF_KEYUP, 0);   // releases the Tab key
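
To give an idea of how the pieces fit together, here is a minimal sketch (not our exact code) of an OpenFrameworks app that reads rotation messages from the Arduino over serial and turns them into simulated key presses. The port name, baud rate and single-character message format ('R'/'L' per encoder step) are assumptions for illustration.

// Minimal sketch: map single-character rotation messages arriving over serial
// to simulated key presses with keybd_event().
#include <windows.h>
#include "ofMain.h"

class ofApp : public ofBaseApp {
    ofSerial serial;

public:
    void setup() override {
        serial.setup("COM3", 9600);           // assumed port and baud rate
    }

    void update() override {
        while (serial.available() > 0) {
            int b = serial.readByte();
            if (b == 'R') tap(VK_DOWN);       // clockwise step -> scroll down
            else if (b == 'L') tap(VK_UP);    // counter-clockwise step -> scroll up
        }
    }

    // press and release a single virtual key
    void tap(BYTE vk) {
        keybd_event(vk, 0, 0, 0);
        keybd_event(vk, 0, KEYEVENTF_KEYUP, 0);
    }
};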

For our authentication we decided to start from the Windows lock screen. The locking action on Windows has a shortcut, Windows Key + L, so we simulated this combination with our program. This first attempt didn't work, so we tried other combinations to see where the problem lay. After a few of those worked fine, I quickly realised Windows has a security measure in place which prevents the computer from being locked programmatically in this way.
As a solution we decided to make our own authentication screen, which would record key presses in the background and, on pressing Enter, check the password; if it was correct, it would let us perform the other previously mapped actions, essentially locking down all cube functionality until access was granted.
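
A simplified sketch of the lock-screen logic looks like this (it sits inside an OpenFrameworks app, and the hard-coded password is just a placeholder):

// Simplified sketch of the lock-screen logic (OF_KEY_RETURN / OF_KEY_BACKSPACE
// come from ofMain.h). The hard-coded password is a placeholder.
#include "ofMain.h"

class LockScreen {
    std::string typed;
    bool unlocked = false;

public:
    bool isUnlocked() const { return unlocked; }

    // forward key events here from ofApp::keyPressed(int key)
    void keyPressed(int key) {
        if (key == OF_KEY_RETURN) {
            unlocked = (typed == "secret");    // placeholder password check
            typed.clear();
        } else if (key == OF_KEY_BACKSPACE) {
            if (!typed.empty()) typed.pop_back();
        } else {
            typed += static_cast<char>(key);   // record the key press in the background
        }
    }
};

// Elsewhere in the app, the keybd_event() mappings only run while
// lock.isUnlocked() returns true.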

Using a similar method, Vlad and I then implemented the following applications for the demo: folder navigation, browser navigation and application switching.

Interactive Devices: Lab 4 Individual Post - Building the prototype

The day before the lab we received the electronic components required to start building our cube: seven rotary encoders. Each one combines a rotary encoder, a push button and an LED. The rotation is not restricted to 360 degrees and the encoder knows whether it is being turned clockwise or counter-clockwise, while the button is a simple push-down, i.e. it has a binary state. The LED is an RGB LED capable of displaying any RGB colour combination, which means vibration feedback was no longer needed, as we could simply give feedback by lighting the LEDs.


With the components in our possession, Amir and I went to the electronics lab to solder each of the eight pins. I soldered the first encoder to its circuit board, but after thinking about it we decided to do away with the boards, because otherwise we would never be able to fit all the rotary encoders inside the cube.
When we got back to our lab, the whole team started looking at how to connect the rotary encoder to the Arduino board by following a Bildr guide linked from the rotary encoder's page on SparkFun.




After a fair bit of trial and error with the Arduino code we eventually managed to write a program which received movement data when the encoder was rotated. Following this success we quickly decided to try to get the button working as well, so we connected the push button wires to the Arduino board and edited our program. Soon enough our program could receive both rotation and push data.
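
To give a rough idea, a minimal Arduino sketch along these lines could look like the following (the pin numbers and message characters are just an example, not our exact code):

// Poll the encoder's two quadrature pins to detect direction, read the push
// button, and report events over the serial link to the laptop.
const int pinA = 2;        // encoder channel A
const int pinB = 3;        // encoder channel B
const int pinButton = 4;   // push button (wired with a pull-up)

int lastA = HIGH;

void setup() {
  pinMode(pinA, INPUT_PULLUP);
  pinMode(pinB, INPUT_PULLUP);
  pinMode(pinButton, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  int a = digitalRead(pinA);
  if (a != lastA && a == LOW) {            // falling edge on channel A
    // channel B tells us which way the knob turned
    Serial.print(digitalRead(pinB) == HIGH ? 'R' : 'L');
  }
  lastA = a;

  if (digitalRead(pinButton) == LOW) {     // button pressed (active low)
    Serial.print('P');
    delay(200);                            // crude debounce
  }
}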

Monday, 18 November 2013

Interactive Devices: Lab 3 Group Post - Research Papers Relating to the Project

In order to gather more information and possible ideas to include in our prototype, we searched the Internet as a group for recent research papers related to our project. Unsurprisingly, there are quite a few academic papers on interacting with computers using alternative devices, which is our aim. Furthermore, many papers can be found on authentication, which is one of the primary features we plan to implement.

Below are the 10 best papers we have found:
  1. Spontaneous Interaction with Everyday Devices Using a PDA
  2. SideSight: multi-"touch" interaction around small devices
  3. Reality-based interaction: a framework for post-WIMP interfaces
  4. The cubic mouse: a new device for three-dimensional input
  5. Real-time 3D interaction with ActiveCube
  6. Sculpting: an interactive volumetric modelling technique
  7. Reducing shoulder-surfing by using gaze-based password entry
  8. An Association-Based Graphical Password Design Resistant to Shoulder-Surfing Attack
  9. Novel Shoulder-Surfing Resistant Haptic-based Graphical Password
  10. Weighted finite-state transducers in speech recognition

Wednesday, 6 November 2013

Interactive Devices: Lab 3 Individual Post - Interactive Rubik's Cube design

In this week's lab we started thinking about the design for our first prototype so that we could order all the parts we needed and start the building phase. After much research on the internet we found that in order to record rotations in the cube we needed some encoders, usually found in older trackball mice; six of them, one for each axis.

As we further developed the design we realised not only that encoders might not be a good solution, because of size and movement-processing issues, but also that Arduino might offer something similar.
Luckily there is a component for Arduino called a potentiometer, which seemed to be exactly what we needed. It behaves like a light dimmer: its value changes according to how far you turn it.


The next problem was that a potentiometer stops turning before completing a full rotation, and we needed something that could turn continuously. After more research on the web we found a version that behaved exactly as we wanted (a continuously rotating rotary encoder), so we decided to use an Arduino board as the core processing unit for our cube.

In our first prototype we also wanted to incorporate RGB LEDs, one for each cube, in order to give the user feedback as to whether a rotation mapped successfully to a command (green) or not (red). While trying to figure out how to insert and connect them, we realised this might not be feasible, because every time a user rotated the cube the LEDs' wires would also rotate and eventually snap.
Considering LED feedback is a minor, more aesthetic feature, it seemed pointless to design the whole cube around it, so I proposed swapping visual feedback for haptic feedback. The group accepted this decision well, so we will now start looking at how to include vibration.


Monday, 28 October 2013

Interactive Devices: Lab 2 Group Post - Final Idea

Following our discussion during the last workshop, Amir went on a hunt for an idea for our interactive device. After much research, and drawing on his passion for solving puzzles, he came up with the idea of creating an interactive Rubik's cube.
The idea is to be able to control your computer using a Rubik’s cube. This includes simple tasks such as zooming in or scrolling down when reading a paper, unlocking your laptop using certain combinations, controlling a 3D simulation, etc.
Amir purchased a couple of Rubik's cubes and held a meeting with the rest of us the next day. Everyone was on board and very excited about the idea. During the meeting he disassembled one of the cubes to explain to the group how we might go about designing ours.
We then went to the HCI lab and pitched our idea to some of the research assistants, getting positive feedback and suggestions on how we could refine our device.


The uses of this device range from controlling a media player to scrolling pages in a web browser or even controlling a 3D simulation. It differs from a keyboard or mouse in that, first, it's wireless; second, it can be personalised to suit users' needs; and third, it's simply a fun way of interacting with your computer.
The device moves us away from WIMP interfaces and gives users the opportunity to personalise the way they interact with their personal computers. The idea can be extended further so that the cube becomes more than just a controller: the cube itself could be a personal device where each of the 27 smaller cubes is a screen. It could then be used for things like answering calls, acting as an alarm clock or even checking your emails.

This is the list of materials we are going to need for the completion of the project:

  1. Encoders from Trackball Mice
  2. Transparent Acrylics
  3. RGB LEDs
  4. Arduino Board
  5. Buttons

Tuesday, 22 October 2013

Interactive Devices: Lab 2 Individual Post - Introduction to electrical tools & Arduino

In this lab we were introduced to several tools we might use to develop or test our projects. Our group was supplied with a new laptop, an Arduino starter kit and some hardware testing equipment.
We started by setting up the Arduino board and installing its SDK on the laptop. We then started implementing some of the example Arduino projects in order to understand how to program the board and how to connect all the wires and various components together.




Our first example program showed us how to build a circuit on the supplied breadboard to light three LEDs. The program we implemented lit each LED once in order and then started again, indefinitely.
By adding a push-down switch, the control flow could also be modified: while the switch was pressed the lights would stop, and when it wasn't pressed everything ran as usual.
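
A rough sketch of that behaviour (the pin numbers are assumptions, not necessarily the ones from the kit's guide) looks like this:

// Cycle three LEDs forever, but pause the cycling while the switch is held down.
const int ledPins[3] = {9, 10, 11};
const int switchPin = 2;

void setup() {
  for (int i = 0; i < 3; i++) pinMode(ledPins[i], OUTPUT);
  pinMode(switchPin, INPUT_PULLUP);          // switch reads LOW while pressed
}

void loop() {
  for (int i = 0; i < 3; i++) {
    while (digitalRead(switchPin) == LOW) {} // stop here while the switch is pressed
    digitalWrite(ledPins[i], HIGH);
    delay(500);
    digitalWrite(ledPins[i], LOW);
  }
}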


In the second example the manual introduced us to a temperature sensor, which reported the voltage across it and the corresponding temperature as the ambient temperature changed.
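
The reading loop is along these lines, assuming a TMP36-style sensor on analog pin 0 and a 5 V board (both assumptions on my part):

// Convert the raw analog reading to a voltage and then to degrees Celsius,
// printing both over serial.
const int sensorPin = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(sensorPin);             // 0..1023
  float voltage = raw * 5.0 / 1024.0;          // assuming a 5 V supply
  float degreesC = (voltage - 0.5) * 100.0;    // TMP36: 10 mV per degree, 500 mV offset
  Serial.print(voltage);
  Serial.print(" V, ");
  Serial.print(degreesC);
  Serial.println(" C");
  delay(1000);
}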

During this lab I also took some time to familiarise myself with hardware such as the Leap Motion and the Oculus Rift, so that I could better understand their behaviour and see if we could incorporate their functionality into our project.

Interactive Devices: Lab 1 Group Post - Presented Ideas

As a group we crystallised our ideas into two main ones.

The first idea is inspired by the movie The Dark Knight (Batman), where the main character uses a cluster of mobile phones as an alternative to sonar devices to map and visualise an enclosed environment.

Since the mobile devices on the market today are not sensitive enough to reliably pick up the sound signals, our approach is going to be to use dedicated sonar devices instead as a proof of concept.

This technology would be beneficial in environments where vision is limited or non-existent, such as:
  • Looking around a corner or behind a wall.
  • Fire rescue and combat, where vision is quite limited (smoke).
  • Helping the visually impaired.


The second idea is a subliminal controller.
The goal is to provide subliminal feedback to the user and subliminal input to the machine. The main output is an "RGB scent printer", which would be able to create a range of smells based on the user's experience within the application, plus an IllumiRoom-style visual output synced with the scents. The subliminal input is a bracelet which detects the user's pulse and changes the in-game experience based on it.


This technology could be used in:

  • Gaming
  • Cinemas
  • Combat Simulations

Tuesday, 15 October 2013

Interactive Devices: Lab 1 Individual Post - Project idea brainstorming

The purpose of the first lab was to gather in our groups and start brainstorming project ideas. The lab was divided into ten-minute phases.

In the first phase we each had to come up with up to ten ideas and write them on post-it notes. Then we stuck everyone's ideas on a board and started looking at them.



In the second phase we looked at all the ideas as a group and tried to categorise them in order to get a clearer picture. We ended up dividing the ideas by platform: Arduino, Oculus Rift and Tablet.



In the third phase we needed to decide which ten ideas to keep and scrap the rest. To do this, each person presented their idea very quickly and explained why it was good. A lot of the time we managed to combine similar ideas, which helped a lot.

The remaining ideas were:
  1. Sonar mapping of buildings through mobile phones
  2. Surface made of adjustable rods with touch screens on them to show a map's terrain in 3D
  3. Oculus Rift game with Kinect - using reality to make the environment in the game and send it back to the Oculus Rift
  4. Wristband to control heartbeat, temperature and pressure to change video game dynamics
  5. Touch-screen jacket
  6. Modifying terrain through vibration/electronic impulses
  7. Tablet and TV interaction with Kinect - using eye tracking to scroll webpages on TV and gestures to move between pages for books
  8. Sending electrical impulses to brain
  9. Water clone/water pressure jetpack
  10. Hologram which could be moved in the space and resized
In the fourth phase we had to narrow the ideas down to two. This was not easy, as some members felt strongly about their ideas, but overall the criticism was received pretty well.
Eventually we were left with:
  1. Subliminal Controller - Using heartbeat monitor to change game dynamics and outputting smells based on the game environment
  2. Bat Mapper - Sonar mapping of buildings through mobile phones
In the fifth and final phase we prepared a presentation and gave it to the rest of the class. We named our team the Subliminal Bats, a combination of our two ideas. The presentation covered three key points for each of the two ideas we had come up with:
  1. What is the big idea and why is it interesting?
  2. Diagram of the system - how will it work?
  3. Three use cases

Subliminal Controller
The idea is to provide olfactory feedback to the user; this is interesting because, up until now, electronic devices have only given acoustic, tactile and visual feedback. Furthermore, for games, the player's heartbeat will also be monitored to dynamically change the gameplay: the faster the heartbeat, the harder it is to see things in the game.
Our aim is to provide a new dimension of entertainment by increasing realism.

This application has three main use cases:
  1. Video games - mentioned above
  2. Cinemas - adding realism to the film
  3. Combat Training - adds combat field realism

Bat Mapper
The idea is to be able to map a building using sound pulses emitted by smartphones. As the world becomes more and more technologically ubiquitous, it is an interesting research question whether current smartphones are powerful enough to provide such information, so that old and bulky machinery can be replaced.

This application has three main use cases:
  1. Accessibility - blind people could use their phone to "see" objects
  2. Fire rescue - knowing which parts of a burning building are still intact can improve safety
  3. Architecture - enables architects to check a building for irregularities, and to check whether anything is missing, by looking at the outputted model
For anyone who's interested, here is the recording of our presentation:



Thursday, 10 October 2013

Paper Review - "Reality Based Interaction: A Framework for Post-WIMP Interfaces"

From the early days of the command line to the present Window, Icon, Menu, Pointing device (WIMP) paradigm much has changed, and it keeps changing quickly. It seems the more technology evolves, the quicker new interfacing systems are invented.
More recently we've seen the boom of the touch screen, which has revolutionised the way we interact with our electronic gadgets by allowing us to perform actions in a much simpler and more intuitive way.

Proposed in the paper is yet another way to interact with our gadgets: Reality-Based Interaction (RBI). Based on four key principles, RBI's goal is to bring reality to our everyday interactions with our gadgets like never before.



RBI's first principle is Naïve Physics (NP): the common-sense knowledge humans have about the physical world. Physical attributes such as mass and gravity are simulated in interfaces to give a more real feel, e.g. the inertia when scrolling through the contact list on an iPhone or Android device.

The second principle, Body Awareness and Skills (BAS), refers to the familiarity and understanding that people have of their own bodies. Using our limbs to interact with interfaces adds even greater reality as humans interact with real objects on a daily-basis.

Environment Awareness and Skills (EAS) deals with the clues the environment gives us about what surrounds us and how to navigate it.
The last principle, Social Awareness and Skills (SAS), includes humans' ability to communicate verbally and non-verbally, exchange objects and work with others on a task; think, for example, of two people passing a virtual object to each other on a shared tabletop display.


Adding realism to an interface doesn't always pay off, though; trade-offs need to be made between reality and other desired qualities in order to produce an effective product.
A good point made by the authors is that reality should be given up only when other desired qualities, such as expressive power, efficiency, versatility, ergonomics, accessibility and practicality, are gained in return.

In conclusion, the material in the paper is certainly exciting and interesting but no concept or theory of the implementation of such methods is mentioned, which makes it feel "hand-wavy" (pun intended).


Sunday, 6 October 2013

Paper Review - “Can We Beat the Mouse with MAGIC?”

The 2013 CHI paper by Ribel Fares et al. expands on the idea of speeding up mouse pointing on desktops by combining physical movement with eye tracking.
This concept is not new; in fact, similar implementations already exist, the latest of which is Conservative MAGIC.

Conservative MAGIC improves on previous methods by warping the mouse cursor only when the user signals intent with a mouse movement. Although this improves user-friendliness, it is still not ideal, because humans find it harder to follow items that warp than items that move continuously.

Mouse cursor warping was then replaced by dynamic speed adjustment: the mouse sensitivity is decreased when the distance between the cursor and the gaze centre is small, and increased when it is large. This introduced yet another issue: when the user went past the target the speed could quickly increase, making the cursor shoot far away.

The implementation proposed in the paper is called Animated MAGIC. It is based on Conservative MAGIC but improves on both accuracy and speed by not only increasing the cursor speed during selection, but also homing it in towards the gaze centre. This helps prevent missing the target and gives a gradual speed change which is easy and comfortable to follow with your eyes.
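
To make the two behaviours concrete, here is a rough illustration in code of how distance-dependent speed and homing towards the gaze point could be combined. This is my own simplification, not the paper's actual equations, and the constants are made up:

// Illustrative only: scale mouse movement by the distance between cursor and
// gaze point, and pull the cursor slightly towards the gaze point each frame.
#include <cmath>

struct Point { float x, y; };

Point updateCursor(Point cursor, Point gaze, Point mouseDelta) {
    float dx = gaze.x - cursor.x;
    float dy = gaze.y - cursor.y;
    float distance = std::sqrt(dx * dx + dy * dy);

    float gain = 1.0f + distance * 0.01f;   // move faster when far from the gaze point
    float homing = 0.05f;                   // small pull towards the gaze point each frame

    cursor.x += mouseDelta.x * gain + dx * homing;
    cursor.y += mouseDelta.y * gain + dy * homing;
    return cursor;
}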



Unfortunately, eye tracking is not very precise, which can hinder the effectiveness of Animated MAGIC. To improve it, the authors implemented a Dynamic Local Calibration method which uses the user's selections as local calibration points.

The results showed that the MAGIC methods perform much better than standard mouse movement, and that Dynamic Local Calibration improved both Conservative and Animated MAGIC. Furthermore, Animated MAGIC increased performance by 8% compared to standard mouse movement and was found to be more usable than all the other methods.