To evaluate possibilities for improving the interface, we carried out a series of qualitative studies: interviews with expert users, field observations, and a review of the related literature and the technical data on the machine. We spent one day on site, talked to a group of operators, and observed how they work. We also did some operating on a simulator at their learning center.
The biggest learning was that processing relatively comprehensive graphical information on screen in a very short period of time demands heavy mental work from the operator. Operators need to make frequent decisions based on observations of their changing surroundings as well as the visual data constantly presented to them on the screens. This overloads the visual information channel, and most of the mistakes operators make occur when they are unable to handle this heavy load of visual input.
To create a wider range of distinctive input gestures, the haptic control combines the two separate joysticks into one device with up to 24 degrees of freedom, allowing both translation and rotation in 3D for each component. This way the control device suits crane control while accommodating a larger number of actions. By using hand gestures instead of buttons, it becomes easier to build haptic feedback into the configuration. The concept control device is based on a main part with 6 degrees of freedom. Two arms are connected to the main part, each of which can be rotated in three different directions, and each arm is connected to a grip with 3 degrees of freedom. All of these joints and arms can have force and position sensing. In addition, by adding passive touch cues presented to the operator's skin, the grip can be used to notify the operator of events and to create relatively nonintrusive, ambient background awareness.
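The joint hierarchy described above can be sketched as a simple data structure. This is a minimal illustration, not the prototype's actual code; the joint names and the per-joint sensing flags are assumptions.

```python
# Minimal sketch of the control device's joint hierarchy.
# Names and the configuration below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    dof: int                      # degrees of freedom at this joint
    force_sensing: bool = True    # joints can have force sensing/feedback
    position_sensing: bool = True # and position sensing

@dataclass
class HapticDevice:
    joints: list = field(default_factory=list)

    def total_dof(self):
        return sum(j.dof for j in self.joints)

device = HapticDevice(joints=[
    Joint("main", 6),        # 3D translation + 3D rotation
    Joint("left_arm", 3),    # rotation in three directions
    Joint("right_arm", 3),
    Joint("left_grip", 3),
    Joint("right_grip", 3),
])
print(device.total_dof())  # 18 for this minimal configuration
```

Counting the joints this way also makes it easy to reason about which degrees of freedom remain available for additional gestures.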
Unlike visual interfaces, haptics are rather complicated to communicate and test, so a physical prototype is practically the only way to demonstrate a haptic interface. I used Processing and Arduino as the development and hardware programming tools to build the prototype. It consists of more than 80 parts, 6 actuators, and 8 sensors.
In this haptic system, tasks can be assigned to the interface components depending on their relations, their potential to run in parallel, their sequences and timelines, their frequency of use, and the likelihood of errors. Controlling the position of the crane's head is one of the actions that benefits from being performed in parallel with other tasks. This makes it possible to control the position and orientation of the crane at any time during the operation, regardless of which tasks are being performed.
Operators often need to change the quality classification of a tree, for example when they encounter a curve in the stem. With the existing controls, the operator has to press the assigned buttons at a particular moment, partly to avoid interference with the other action buttons and partly because it is not always ergonomically possible. Observing user tests on the prototype, I saw that users change the quality of the tree as soon as they notice the need to do so, simply by orienting the right hand at a different angle. Rotating the grip, in this case, does not affect the other parts of the operation, which results in a smooth flow in the process.
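One way to picture the grip-rotation gesture is as a mapping from rotation angle to assortment. The thresholds and assortment names below are illustrative assumptions, not values from the harvester or the prototype.

```python
# Illustrative mapping from right-grip rotation angle to tree assortment.
# Band thresholds and assortment names are assumptions for the sketch.
ASSORTMENTS = ["sawlog", "pulpwood", "energy wood"]

def assortment_from_grip(angle_deg):
    """Pick an assortment band based on how far the grip is rotated."""
    if angle_deg < 15:
        return ASSORTMENTS[0]   # near-neutral grip keeps the default quality
    elif angle_deg < 45:
        return ASSORTMENTS[1]
    return ASSORTMENTS[2]

print(assortment_from_grip(10))   # sawlog
print(assortment_from_grip(30))   # pulpwood
```

Because the mapping reads only the grip's rotation, it can run continuously alongside the other controls, which is what lets the quality change happen mid-operation without interrupting anything else.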
The structure of this interface makes it possible to implement kinesthetic feedback in most of the linkages as the main type of haptic feedback. I learned from observations that operators do not need to know the exact numbers displayed on the screen. In the prototype, the on-screen numbers associated with the next cutting lengths and the current position of the saw are replaced with kinesthetic feedback in the form of a changing angle between the two arms. As the cutting point gets closer, the arms move closer to each other, making the operator aware of the next action without having to read the numbers on the screen.
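The distance-to-angle cue can be sketched as a simple linear mapping. The ranges (a 5 m lookahead, arm angles between 5 and 60 degrees) are assumptions for illustration, not the prototype's calibrated values.

```python
# Sketch of the kinesthetic cue: the angle between the two arms shrinks
# as the saw approaches the next cutting point. Ranges are illustrative.
def arm_angle(distance_to_cut_m, max_distance_m=5.0,
              open_angle_deg=60.0, closed_angle_deg=5.0):
    """Linearly map remaining distance to the angle between the two arms."""
    t = max(0.0, min(1.0, distance_to_cut_m / max_distance_m))
    return closed_angle_deg + t * (open_angle_deg - closed_angle_deg)

print(arm_angle(5.0))   # 60.0 (far from the cut: arms fully open)
print(arm_angle(0.0))   # 5.0  (at the cutting point: arms nearly together)
```

A nonlinear curve could also work here; the point is only that the operator feels the approach of the cutting point rather than reading it off a screen.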
By using gestures instead of buttons, there is a great opportunity to help the operator form a mental model that adapts to their proficiency over a longer period. Replacing buttons with gestures also makes it possible for the user to perform a combination of tasks at the same time. Here the operators constantly need to do a quick check on the next cutting point and be able to change the assortment (quality) of the tree if needed. Using the rotation gestures, operators scan the tree in the direction of the rotation and can continue the operation at any time, without needing to activate another button.
As the level of operational skill increases, the role of feedback diminishes and attentional control shifts from low-level processes to more conceptual, high-level processes. This means that too much feedback can have adverse effects on an expert user's performance. Depending on the user's experience and the task, I observed that users learn to filter the type and control the amount of feedback they receive. At the beginner level, haptic feedback can also offer clues about an operator's options, through constraints and gentle guidance. Here the user tries to position the hand to feel the force feedback generated by the linear motor as needed. Force feedback is also a great way to guide the user on how to perform. In the right image, the cut button is controlled by force feedback to let the user know when is the best time to perform the cutting action.
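Two of the behaviors above can be sketched together: attenuating guidance as skill grows, and gating the cut control with resistance until the saw is well positioned. The gains, the 0.2 floor, and the resistance values are illustrative assumptions.

```python
# Sketch of experience-dependent feedback scaling and the force-gated
# cut control described above. Gains and thresholds are assumptions.
def feedback_gain(skill_level):
    """Attenuate haptic guidance as operator skill (0.0-1.0) increases."""
    return max(0.2, 1.0 - 0.8 * skill_level)  # experts keep a faint cue

def cut_resistance(in_cutting_window, base_force=1.0):
    """The cut control resists until the saw is in a good cutting position."""
    return 0.1 * base_force if in_cutting_window else base_force

print(feedback_gain(0.0))   # 1.0 (beginner: full guidance)
print(feedback_gain(1.0))   # 0.2 (expert: minimal guidance)
```

Keeping a small nonzero floor instead of removing feedback entirely reflects the observation that even experts benefit from a faint cue they can choose to ignore.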
Here is a demo of the prototype; the simulator is developed in Processing. It demonstrates how a single operation can be controlled without the presence of any visual information.
What I learned from user tests was that, as expected, there is a learning curve for using the haptic control. Once operators learn how to use it, they start exploring new possibilities and discover the flexibility the system gives them. The big takeaway from the testing sessions was that the haptic system can be flexible enough to adapt to both experienced and less experienced operators. It was also noted that some level of visual information is still helpful (and needed) for better operation.
Although the sense of touch is generally used in conjunction with the other senses for the best possible results, with this prototype I show that the operation's goal can be accomplished without any visual cue and without using buttons as a means of input. Reconfigurability is an important advantage of haptic interfaces, as they can change their feedback in response to the environment they control; I observed this to some degree when using the prototype with the available functions. Haptic feedback can reduce motor or visual strain when the manipulation is exacting or prolonged. It can also offer selective, suggestive guidance with a cue that the user can smoothly and variably override. For long-term operations like harvesting, continuous haptic control has benefits over the discrete input of buttons. Human adaptation to dynamic interaction forces and the ability to learn the control of extrinsic states has previously been shown in visuomotor tasks. In a feedback task, this adaptation can be probed by altering the dynamics of the object or the sensory information about the object's motion. Muscle memory is a great source of manual skill, and the frequent, patterned nature of the tasks in this haptic architecture helps structure stylized gestures, reducing cognitive load and long extra steps.