Blog

To Game Or Not To Game? Gamification in Learning.

Written by Ilyena Hirskyj-Douglas, edited from an original written with Fabricio Oliveira & Camilla Groth.

In January I started a teaching course at Aalto which included how to activate students' learning within a classroom. This blog post offers an introduction to active learning and basic gaming methods, then gives examples of storylines used within teaching to demonstrate gamification and storytelling in classrooms. Lastly, a full (episode-one style) storyline of Spartacus is included, which was taught to students on the course.

Want further links? You can also find our PowerPoint presentation and links to other gamification methods at the bottom of this page.


INTRODUCTION

Active learning is student-centred and promotes deep learning, as the students actively take responsibility for their learning process. This can happen intuitively if the intrinsic motivation to learn something for the purpose of achieving something positive can be induced. Games offer such environments; Dungeons and Dragons in particular has been used extensively for educational purposes to encourage students with low confidence and low study motivation.

When engaging learners in active participation and in teamwork around a common subject, the experience is shared and the learners create knowledge together. In ideal cases this speeds up the learning process, as the group is “more than one” and mutual experiences can be discussed and reflected on from more perspectives than one.

Roleplay includes active participation, collaboration and seeing things from a new perspective. It also challenges the static audio-visual and “PowerPoint centric” lecturing style that so often rules university teaching. In role-playing, you can include more physical and bodily learning elements and tasks, incorporating more experiential learning processes. Additionally, you can activate the learners’ intrinsic motivation by immersing the learner in the world of the subject – in our case a distant historical setting that becomes a lived experience.

BASIC GAMING METHODS

When using a game as the setting for learning activities, role-playing is one key method. Here a game master (the teacher) presents the fictional setting, arbitrates the results of character actions, and maintains the narrative flow. The participants choose a character from a set of options determined by the game master. This can also include customising the character and its properties, such as skills, intentions or position in the game (for instance friend or enemy, historical character, or someone in a particular physical or psychological state). The participants, or players, then assume the role of that character and start playing, acting and making decisions from the perspective of that character, embracing the role formed.

As the game master leads the game and the situations that the characters encounter, the game is not entirely in the hands of the players; the storyline pushes the events and players towards conducting “learning activities” that help the players achieve the intended learning outcomes.


STORYLINE

Setting the storyline is important, as this is the driving force and environment of the whole learning situation; thus it promotes the intrinsic motivation as well as securing the learning outcomes. Below are some storyline examples, which I have written, for different learning purposes. Each purpose has a different activity (here, dice rolls and correct answers) within a different context (fantasy and maths).

Dice Roll: Fantasy

You all wake up slowly, shaking the hazy sleep from your eyes, and look around you at the remaining metal of the ship torn apart, fragments of what was once your home lying – still warm from impact – at your feet. All of your team’s clothes are drenched in mildew-green sweat as the humidity from the jungle around you rises to seep into your skin. You hear someone coughing in the distance… This can’t be real is all you keep on thinking. This can’t be real. You check who is around you… *people all say who they are and what race & class*. You all stand up, wiping the green stuff from your slightly tattered body suits, and you see a box in the corner, its golden lock glinting under the green jungle leaves. Your team walks over and someone kicks it hard but it just won’t budge. *Pick someone from your team to roll a die; if you roll above 5 then the box opens, if not, the key you try to use gets jammed inside the chest*. For those who opened the box, as you look inside you find drinks which you all quickly slurp down *+1 strength*

Correct Questions: Math

As you stand before the towers of Yasha you look down to the ground and realise this may be the last step you will ever tread. Fearlessly your group look into each other’s eyes, not speaking a word. You all turn and begin to take in the task ahead. He must be stopped. The evil King has ruled long enough, harrowing your lands further into despair whilst his wealth grows richer behind these steep walls. As your group stands in silence contemplating, you feel the ground shake, softly at first and then harder, and realise what is coming. The Pentagrams on horses are riding towards you over the mountain of Pythagoras screaming “All is number”, ready to quarter you as you stand. Quickly – to scale this wall you need to give the formula to work out one side of the triangle, to figure out how high you have to climb. If you have spell book 39, you get to turn over Hint 2 to aid you on your quest. *insert 30 second timer*
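Answer (not printed in the original storyline, so treat this as my own addition for completeness): the Pythagorean theorem, a² + b² = c², rearranged for the unknown side, e.g. a = √(c² − b²).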


OUR MODEL

The content of the above learning situation can differ and the roleplay settings can be more or less complicated. Thus, adjustments need to be made depending on the required learning outcomes.

The idea here is to teach history through a roleplay game that encourages group work and solving problems together in order to advance and learn more. There are elements of competition between the groups, and they have to collect points (or lose points) as they go through the learning “path”. The storyline uses historical facts as the basis and environment for the roleplay scenario. The students have to solve problems in order to advance their knowledge and to advance in the game. They are given a set amount of “lives” and they either gain or lose lives (hearts on the gaming paper they were given), and in the end the group with the most hearts wins. Each problem-solving task is naturally bound to the topic of historical facts, but the tasks are different in nature so that the diversity of the group is utilised (everyone can participate, not just one leader).


We are in Greece; the year is 73 BC. You are an enslaved Greek in the house of Pupus Piso in the centre of the city.

The air is warm around you and dry as the earth. You slowly wake up, opening your eyes to see the stone room and the cold earth pushed hard into your back. You sit and sigh, taking a moment’s rest listening to the fishing nets being thrown into the water behind your house. Your master Pupus Piso is angry with you. This psychological warfare between master and slave reached its tipping point as he told you he had no time to talk to you anymore. You look through the door at the golden goblets of wine strewn on the floor from last night’s elegant dinner party thrown by your master the senator. He told you he did not have time for your idle talk anymore and then sent you to check on Clodius, who had yet to arrive. After several times of sending you to find this guest of honour he asked why Clodius had yet to come, to which you answered “he declined”. In despair and anger Piso asked you why you didn’t tell him earlier; you told him he never asked, and he told you not to talk until spoken to. Now your fight against your inferior position has left welts on your back from the anger of Piso. You’ve been taking too much influence from the slave rebellions against the Roman Republic, known as the Servile Wars. Maybe rebelling was not all that it was cracked up to be. You rub your back tenderly and try to remember what else Piso has been referring to the Third Servile War as… he was always shouting about something between his glasses of wine and feasts filled with trapped beasts.

Question: Quick! You have 30 seconds to Google: what was the Third Servile War also known as? Report your answer via the messenger with the yellow answer paper to the game master or perish a heart.

Ahhhh, the Gladiator War and the War of Spartacus, you remember now. They were only a few hundred meters to the south when 70 of the enslaved gladiators escaped from the gladiator school in Capua. So far they have defeated every Roman force sent to recapture them. You doubt they will last much longer once the Roman Senate (your master included) grows more alarmed. They consider this more of a policing matter than war, as slaves are considered natural and necessary. You walk towards the main room picking up a golden goblet as you go, feeling the warm breeze from the north run through your undergarment chiton and your himation cloak. You see the barely risen sun and gather some gold coins from your supply to walk into the city centre to pick up more barley bread to dip in wine to make a breakfast of Akratisma, before attending to your chores. Maybe you will even get some figs or olives to try and please your master.

As you enter the Agora, the Greek marketplace, you hear the bustling of the stalls around you selling their goods. Someone to the right of you is shouting political agendas about the new Senate politics, wanting a stronger army against Spartacus. You snigger as they mention military terms applied to slaves. A slave as a leader? Who ever heard of such controversy and hierarchical military leadership amongst slaves? You buy a loaf of bread after haggling for a lower price and begin to head back before you overhear someone dropping your master’s name on the other side of the stall. You lean closer to listen in.

“… He is not a good slave. He should have just cut him where he stands”

“Ahh!! Now now Aesop, you know he can make some gold from him. He is disobedient but that can all be beaten out”

“You mean will be beaten out, I saw Piso warming the fire before – for a branding show!!”

You freeze and drop the bread, rushing to pick it up from the dusty floor before composing yourself and walking towards the nearest alleyway. Maybe Spartacus has it right, you think angrily, clenching the bread so hard you feel its thick crust crack under your fingernails. You heard whispers they are up towards the Capua volcano, hiding among the herdsmen and shepherds of the region. Without thinking you walk towards the volcano.

As you enter the rebel slave camp on Mount Vesuvius, you can see the thin but resilient faces of the slaves. It looks like stubbornness has worn thicker than the Roman starvation had planned. A slave from the house of Clinias just told you that they had defeated a second expedition, almost capturing the praetor commander, killing his lieutenant and seizing the military equipment, which would explain the shiny new metal shields that glinted from the bottom of the volcano like warning signs to the Roman Empire. It appears more slaves have flocked to Spartacus’ forces, as well as you… you squint your eyes into the distance to estimate the approximate number of slaves around you.

Question: Quick, write a number on the back of your heart paper estimating the number of slaves in Spartacus’ ranks after the second expedition. You have 30 seconds to write and hold up your paper.

Answer: If you guessed between 60,000 and 80,000 then you are close; get one heart. If you guessed outside of this, perish a heart.

You estimated 70,000 – Ares does seem to favour these rebels, showing Spartacus as an excellent tactician. All around you men are training, echoing the gladiator-show style of military practice. You hear whispers of further raids in Nola, Nuceria, Thurii and Metapontum.

Two years later you’re marching with the Spartacus branch, leaving your winter encampment to move northwards. The Roman Senate is alarmed at your troops defeating the praetorian forces, and word is spreading that they have dispatched a pair of consular legions under the command of Lucius Gellius Publicola and Gnaeus Cornelius Lentulus Clodianus to slaughter you all. Spartacus ordered the rebels to split in two, half going to his right-hand man Crixus. Crixus, who formed a troop of 30,000, has already been defeated near Mount Garganus, but now it is Spartacus’ group’s turn. It’s time for your turn. You think to yourself this is everything that you have been fighting for: for the freedom of yourself, for the freedom of others, and for the freedom of the future slaves to come. You lay down your circular shield and slide your breastplate on, clipping it swiftly at the side to protect your heart. You can’t stop hearing your heart beating, beating deep within your ears. You reach down to the rich volcanic ground for your…

Question: Pick one member of your group to turn over the piece of paper and, charades style (one person acts out whilst the others guess), re-enact the weapon that the Greek warriors used. You have 30 seconds, and the actor must not speak until you have guessed. If you do not guess within this time you lose a heart. If you have no hearts left your team is dead.

Answer: Spear

The sharp iron head of the spear glints in the sunshine as the wooden shaft lies softly on your palm.

Will Spartacus’ army defeat the Senate’s republic? What will become of Spartacus and of the slaves that joined him? When will the rebellion cease and slavery end in the Greek era?

Join us next time to find out.

Whilst the students enjoyed this class, they did comment that the storyline could have been more complex and that they felt a bit rushed. Ideally, these gamification scenarios would be run over a longer period and within the course learning outcomes.

Have you used gamification in the classroom? Have some feedback on the methods we used? I would love to hear from you: comment or get in touch!

INSPIRATION AND LINKS:

Some websites on active learning and gamification in education:

The BEER game.

For learning about logistics. The purpose of the game is for retailers and producers to coordinate and cooperate to meet demand.

https://en.wikipedia.org/wiki/Beer_distribution_game

What is Dungeon and Dragons:

http://dnd.wizards.com/dungeons-and-dragons/what-is-dd

Dungeons and Dragons for education:
http://dnd.wizards.com/articles/features/dd-classroom

Fantasy-based learning and gamification for maths
https://www.khanacademy.org/

Paper about active learning:

http://onlinelibrary.wiley.com/doi/10.1002/j.2168-9830.2004.tb00809.x/epdf

Our PowerPoint Presentation

Role-playing in teaching Presentation


How to Build Dog Detection Systems with Arduino

If, like me, you love technology and dogs, then you’ve probably at some point wondered how to combine the two. With Arduino modules becoming ever more affordable and accessible, this blog post gives an account of using Passive Infrared (PIR), ultrasound and Infrared (IR) range sensors with these microcontrollers to enable technology to detect your pet pooch. I call this crazy techno-dog adventure Doguino!

Before setting off on a mad buying rampage, I researched and found that others had had similar ideas to me, such as dog food/treat dispensers, dog petters, pet-controlled doors and ball launchers. These devices seem to use a number of different sensors, such as potentiometers (which allow variable voltage to act as a switch), automated devices using gears and shafts driven by timers (servo devices), Radio-Frequency Identification (RFID) tags on collars, buttons, etc. However, I noticed that none of these devices sensed the dog’s normal activities without training your pup to use buttons or clipping a device to him/her. So instead, I decided to start exploring sensing the dog’s normal behaviour as an input device. By using an Arduino Leonardo, my dog’s movements could become keyboard commands. These keyboard commands could then trigger videos to be played to my dog, allowing my dog to become his very own TV remote. But first I had to detect my pup…
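To give a flavour of the idea, here is a minimal sketch of turning a detection into a keystroke on a Leonardo. The pin number and the key sent are my own assumptions for illustration, not the actual Doguino code (that is in the download links below):

#include <Keyboard.h>

const int SENSOR_PIN = 2;            // assumed digital pin wired to a detection sensor

void setup() {
  pinMode(SENSOR_PIN, INPUT);
  Keyboard.begin();                  // the Leonardo can act as a USB keyboard
}

void loop() {
  if (digitalRead(SENSOR_PIN) == HIGH) {
    Keyboard.write('d');             // assumed hotkey that triggers a video on the PC
    delay(500);                      // stop the keystrokes flooding the computer
  }
}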

Prefer just to see the code? Skip to the bottom for download links. 

Passive Infrared sensor (PIR)

Firstly I explored using PIR electronic sensors, which measure IR light and are often used in motion detection systems. PIRs work by detecting temperature (heat energy) in the form of radiation given off, or reflected by, objects. For my Doguino project, I thought maybe I could use two PIRs, one to detect a dog’s movement and one to detect an owner. The idea was that the system would only work if a dog was detected, so as not to distract from our play time. The device I chose to use was an HC-SR501 body sensor module (below), which has adjustable sensitivity and output timing (signalling) controls (the orange bits!).


As this device has a 110° cone angle range, one of the modules would need a cone fitted to reduce this range so that two could be used together: one for the dog and one for the human. The summary of the code (the full version can be downloaded below) is that when the PIR labelled ‘human’ is triggered, human motion is present; when the PIR labelled ‘dog’ is triggered, dog motion is present.
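In sketch form, that dual-PIR logic looks roughly like this (the pin choices are my assumption; the downloadable version below is the definitive code):

const int HUMAN_PIR = 2;   // assumed pin for the human-height PIR
const int DOG_PIR   = 3;   // assumed pin for the dog-height PIR

void setup() {
  pinMode(HUMAN_PIR, INPUT);
  pinMode(DOG_PIR, INPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(HUMAN_PIR) == HIGH) {
    Serial.println("human motion present");
  }
  if (digitalRead(DOG_PIR) == HIGH) {
    Serial.println("dog motion present");
  }
}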

Building

pir_dog-e1504299617296.jpg

Whilst testing this device, it was found that the two PIRs would often not signal at the same time, resulting in a series of quick switches between the human and/or dog states. To combat this the system was debounced, which means the dog and/or human had to be detected for a certain period of time. This system was then tested further by building cardboard cones so that the two could sense at different heights – one at low dog height and one at a higher human height – to distinguish between the two different motions.
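The debounce works by requiring the PIR to stay high for a sustained period before the state flips. A minimal sketch, assuming a 2-second hold (the actual threshold in my code may differ):

const int DOG_PIR = 3;                  // assumed pin for the dog-height PIR
const unsigned long HOLD_MS = 2000;     // assumed period the PIR must stay high

unsigned long dogSince = 0;             // when the current detection run started
bool dogPresent = false;

void setup() {
  pinMode(DOG_PIR, INPUT);
}

void loop() {
  if (digitalRead(DOG_PIR) == HIGH) {
    if (dogSince == 0) dogSince = millis();           // start timing this run
    dogPresent = (millis() - dogSince >= HOLD_MS);    // only count a sustained signal
  } else {
    dogSince = 0;                                     // any gap resets the timer
    dogPresent = false;
  }
}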

Testing

testing_PIR

This system was then tested with my dog Zack. It was noticed that when my dog was watching the TV device he would often pause whilst watching (not move), which would turn off the device, as it presumed that without movement the dog was not there. For this reason, the device created was not suitable for dog-computer interaction (DCI) with media, but it could be used in other projects where movement is necessary as an input. Drats – onto ultrasound!

Ultrasound Detection

The second input method I tried was ultrasound detection with a 4-metre range. Ultrasound works with sound waves of a higher frequency than the upper audible limit of human hearing. For dogs the hearing range varies with breed and age, and is thought to be around 67 Hz to 45 kHz, though it has been argued to reach as high as 50 kHz. Most ultrasonic sensors work at 40 kHz, including the one I got hold of (the HC-SR04). This means that some dogs may be able to hear this sound.

Building

For this reason, the ultrasound device was tested with my dog first to see if his ears pricked at the sound. As Zack was almost six, his hearing had lessened slightly with age, and my dog outwardly appeared not to react to the device – but I was still cautious. The HC-SR04 device used has a ranging angle of 15 degrees, which is a rather small cone compared to the PIR above. For this reason, the device was placed, once again, at the dog’s body height to achieve maximum reflection from a solid object.

Testing

zack.jpg

Whilst testing this device it was noticed that, suspected to be due to my dog’s fur, the device would often present null (nothing detected) even though my dog was there. To combat this, I then set the program to take five readings and average these out to give a single reading, which, due to the Arduino’s fast loop speed (the program running the code), was still fast enough for real-time detection. This was, however, slowed to around one reading per second when null readings were given, which reduced the accuracy of the detection. This was then, once again, tested with my dog.
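As a sketch of the averaging approach, assuming the standard HC-SR04 trigger/echo wiring (the pins and timeout are my assumptions):

const int TRIG_PIN = 9;    // assumed trigger pin
const int ECHO_PIN = 10;   // assumed echo pin

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // 10 us pulse starts a ping
  digitalWrite(TRIG_PIN, LOW);
  long echoUs = pulseIn(ECHO_PIN, HIGH, 30000);         // 30 ms timeout; 0 = null reading
  return echoUs / 58;                                   // microseconds to centimetres
}

void loop() {
  long sum = 0;
  int valid = 0;
  for (int i = 0; i < 5; i++) {           // five readings averaged into one
    long cm = readDistanceCm();
    if (cm > 0) { sum += cm; valid++; }   // skip the nulls caused by fur
  }
  long average = (valid > 0) ? sum / valid : 0;
  Serial.println(average);                // 0 means nothing (reliably) detected
}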

It was found that even with the averaging in the code, because the program was running so fast the null readings still resulted in a flicker effect (quick on-and-off changes), crashing the program with too-fast keyboard commands. This is suspected, as above, to be due to the dog’s fur muffling the sound reflections. This could have been reduced by materials that reflect sound, like steel and plastic, but short of building my dog a reflective vest I did not see this as a plausible solution.

Infrared (IR) Range Detection

Lastly, Infrared (IR) Range Detection was explored to try to sense my dog. I decided to use a Sharp GP2Y0A02YK analog system with a range of 20-150 cm as it had a fixed detector and accurate distance calculations.

Building

I noted that IR is reduced by a dog’s fur retaining heat and thus the IR waves, but I did not think this would be an issue as the PIR sensors (above) had already successfully detected my dog. The only issue I did think might occur is that, due to the lack of a Fresnel lens, the angle of reflection is minimal and smaller than the previous two options – so I doubled up with two sensors!

ir.jpg

Testing

During testing it was found, once again, that occasionally the detector would give misfired readings (12% of readings gave values with an error of up to 42% over the actual distance). So, as with PIR and ultrasound, the readings were averaged out, this time over the last 25 readings (which differed by an average of 93). To ensure a fast reaction, no delays were used in this program. In testing this setup worked, but the small angle of the range finder meant there was only a narrow width the dog could stand in; this was extended by spacing the two sensors so that they exactly fit the widest part of the dog’s shoulders, giving the largest detection area. However, a new problem was now discovered: my dog’s tail! I found that Zack’s wagging tail would result in the dog being quickly detected and then undetected. To fix this I made the Arduino program maintain detection for a minimum period of time (3 seconds) to mitigate against the quick wags.
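A sketch of that wag-proof hold, combined with the Leonardo keyboard output. The threshold distance, the key sent and the crude analogue-to-distance mapping are all my own assumptions; the real Sharp sensor needs proper calibration (see the download links below):

#include <Keyboard.h>

const int IR_PIN = A0;                    // assumed analogue pin for the Sharp IR sensor
const int THRESHOLD_CM = 100;             // assumed "dog is here" distance
const unsigned long HOLD_MS = 3000;       // keep detection alive for 3 seconds

unsigned long lastSeen = 0;
bool detected = true;                     // start "detected" to avoid a spurious keystroke at boot

void setup() {
  Keyboard.begin();
}

long readIrCm() {
  // crude assumed mapping from the analogue reading; not a calibrated curve
  return map(analogRead(IR_PIN), 0, 1023, 150, 20);
}

void loop() {
  long cm = readIrCm();
  if (cm > 0 && cm < THRESHOLD_CM) {
    lastSeen = millis();                               // dog (or tail!) seen just now
  }
  bool nowDetected = (millis() - lastSeen) < HOLD_MS;  // holds through quick wags
  if (nowDetected && !detected) {
    Keyboard.write('d');                               // assumed key for "dog present"
  }
  detected = nowDetected;
}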

Success! This system worked perfectly, allowing the dog’s own movements to be translated into keyboard commands as a detection system. More importantly, however, a number of guidelines were formed for mine and Zack’s future Doguino adventures.

Guidelines

  • Average Out Results. Within sensing products there is always an element of false readings, as discussed above with debouncing and error readings. To combat this I would advise Doguino devices which use sensing products to average out their readings. There is, however, no firm number to average by (as seen here the range is 2 to 25), as the averaging of the readings depends on the speed of the reading input (the loop). This averaging out is even more important if the sensing device is to act like a switch, which relies on accurate readings.
  • Test first in laboratory settings, then with a test dog. User testing is key! When building Doguino products, it became important to first test the setup in an artificial setting (with myself!). As shown above, this is where the majority of problems in the systems built were spotted. Whilst this does not devalue user testing with a dog, this method ensures that during user testing the system is up to specification.
  • Sensors work best at the dog’s body height. As sensors work by detecting some form of waves (IR, ultrasound and RF), which require either reflection or emission, the sensors within these projects were found to work best at the height of the dog’s body. This provided the biggest static area.
  • Take into account the dog’s physiology. As Arduino products and projects are often built for humans, the dog’s physiology needs to be taken into account in Doguino projects. Instances in these projects which affected the design were the dog’s fur, hearing and tail wags.

 

Download Links

Dog detection using PIRs: Arduino or text file format

Dog detection using ultrasonic: Arduino or text file format

Dog detection using IR: Arduino or text file format

 

Working on any quirky Arduino or technology projects with your dog? I would love to hear from you! 

 

Designing Technology for Dogs

The field of animal-computer interaction (ACI) studies how to create interfaces and methods that allow animals to use technology. This one-day workshop, held in ATLAS at the University of Colorado Boulder, explored the field of ACI, focusing specifically on dog-computer interaction (DCI) and how to design dog-computer products that address the needs, wants and limitations of the dog end-user. This blog gives the findings from the workshop as well as materials for those who attended.

Workshop Activities:
• Design a remote-control product for a dog user, based on the persona tool provided.
• Redesign the remote control product to be user-centric for dogs.
• Critically assess the effectiveness of product designs.

Useful links:

Persona set and blank personas

Persona data

Psychophysics of Dogs

Lecture Slides

Event Information

Workshop Output – Task One

Four teams of dog technology developers were formed and set about the first task of designing a dog remote to allow a dog to turn a TV device on and off and change the channel. These designs were based upon the personas given (see above for the download link).

AARF – Alleviating Ageing Ramifications For… Senior Pups 

This group made a dog bed that turned on the TV device when the dog sat in the bed. The bed also sensed the dog’s biofeedback to provide the owner with medical feedback on stress and anxiety.


Team Two

This team used the puppy and rescue personas, as both were food motivated, new to their home and young. The design made by Team Two was of a TV device with a rope pull toy which allows messages to be sent to the TV device and to the human. As part of this system, pulling another rope would squeeze the harness the dog was wearing to simulate being hugged. To further tailor this device to the dog, the two rope pulls included different smells: one of the owner and one of chicken. Lastly, the device also had a 3-minute timer inbuilt so that if the dog was not paying attention to the screen the device would turn off.


Jaystuff

This remote system was based upon a one-button, joystick-style ball control. The ball system would emit a tone associated with the channel change so that a dog could learn to associate these sounds. This team also put forward the idea of having alternate accessories such as balls.



Nuzzleflicks

Nuzzleflicks made a nose-touch pressure plate to allow the dog’s nose to change the TV channel. They had a strong design focus on playful technology and wanted their system to be adaptable for the dog, based upon the differences spotted within the personas.


Task Two

Task two was to redesign these remotes based upon the gathered knowledge around the philosophical stance of dog-centred computing. Certain limitations were put upon the designers, such as no use of buttons and a requirement that the system fit in with the dog’s senses (see the psychophysics of dogs sheet above).

AARF – Alleviating Ageing Ramifications For… Senior Pups 

AARF turned their buttons into sniff bins, where the smell correlated with the TV content being shown (i.e. if bunnies were on screen, the smell output would be of rabbits). They also introduced two beds, one fitted with the device and one not, so that the dog could choose to use the technology as a form of consent. This team also wanted to include machine learning within the dog bed to learn what the dog wants over time.


Team Two

Team Two added more tactile input features to the pull ropes by adding bumps to them. They also wanted to add machine learning features driven by biosensor feedback from the dog, so that the TV would avoid showing media that he/she does not like.


Jaystuff

Team Jaystuff were concerned with the issue of immediate feedback and the gap between playing with the toy and the TV channel changing. They also realised there was a dilemma between the dog enjoying playing with the ball toy and watching the TV, where the dog may want to play but not change the channel. They also spoke about the frustration the dog may face because the ball cannot fully be flicked, and mentioned that to be fully dog-centric the human should be taken out of the picture. They indicated this could be done by allowing the ball to hit the TV to stop the device (but noted that the human would get frustrated!) or to loop under the floor so that the dog will not be distracted by the ball when he/she is paying attention to a TV show. Lastly, Jaystuff also spoke about adding in FitBark to help inform the system.


Nuzzleflicks

Nuzzleflicks revamped their design by creating a dog-bone style remote that was chewable and incorporated smells that a dog would enjoy. They chose to decorate their design in colours suitable for a dog and flavoured the bone. Lastly, to promote longevity of the remote, they suggested Nuzzleflicks have removable sleeves, with owners able to buy different ones to attract their individual dog.


Workshop Findings 

Feeding back, the participants spoke about how it can be hard to make things dog-centric (as we are not dogs), enjoyed learning about different animal technologies and, of course, designed lovely new dog screen products!

News article on workshop

 

International Conference on Animal Computer Interaction (ACI) 2016

After three years of ACI events, this was the first year ACI branched out to form its own conference, supported by the ACM. With two workshops over the first day, full and short papers, extended abstracts and posters, this year’s ACI2016 conference brought out the experts in the new and upcoming field of animal computing.

This blog contains a brief summary, supported, where the authors gave permission, by papers and presentations to allow the further dissemination of the brilliant work presented.

 


Workshops


Workshop: Methods of ACI (RM4ACI)

Organisers: Anna Zamansky and Amanda Roshier

Workshop: Don’t cut to the chase: Hunting experience for Zoo Animals and Visitors

Organisers: Fiona French, Mark Kingston-Jones, Mark Campbell, Sarah Webber, Heli Vaataja and David Schaller

Vimeo Video: Big Cat Hunting Enrichment at ZooJam 2016

Vimeo Video: Enrichment for sea lions at ZooJam 2016


Paper Presentations


Search and Rescue: Dog and Handler Collaboration Through Wearable and Mobile Interfaces

Clint Zeagler, Ceara Byrne, Giancarlo Valentin, Larry Freil, Eric Kidder, James Crough, Thad Starner and Melody Jackson.

Zeagler and Byrne gave their presentation on how a dog and human interact, suggesting that a vest-based device could let humans check on a dog’s location, assessed using Nielsen’s heuristics. This technology could be used by handlers of search and rescue dogs to find and record the location of the dog quickly. They presented an app and spoke about the work they had undertaken with end-users to iteratively and continually design their system. Requirements that came out of these discussions were around waterproofing the device to allow for dogs swimming, adding padding to the vest that the dogs wore, and making the vest bright to allow hunters to see the dog.

The Impact of Training Approaches on Experimental Set up and Design of Wearable Vibrotactiles for Hunting Dogs

Ann Morrison, Rune Heide Moller, Cristina Manresa-Yee and Neda Eshraghi

Morrison gave an interesting presentation about creating vibrotactiles for hunting dogs, brought on from requirements she faced in her own experiences with her dog. She agreed with others in the field that this was collaborative work, similar to how she hunts with her dog in collaboration. She presented her initial studies testing a vibrotactile vest, with the dog being given a signal and then choosing a direction based on this vibrating signal. Questions were asked of Morrison about whether she ensured that the owner, or person within the study, gave no meta clues to the dog about the choice. This is because dogs have been shown to follow human-given cues in tasks (e.g. gaze and hand gestures). Morrison said that she records all the interactions to check, and also makes sure that the person involved in the study does not give any clues.

Canine Computer Interaction: Towards Designing a Touchscreen Interface for Working Dogs

Clint Zeagler, Jay Zuerndorfer, Andrea Lau, Larry Freil, Scott Gilliland, Thad Starner, Melody Jackson

Zeagler presented his work on creating a nose touchscreen alert system for dogs, where they had to nose-swipe two dots for a successful activation. This was based upon an exploration of assessing Fitts’ Law within Dog-Computer Interaction (DCI). This work reminded me initially of Charlotte Robinson’s early work on alert interfaces, where she used a tug toy system. The talk also covered the different training mechanisms used by Zeagler to train the dog to use the system most effectively, such as back-training. Zeagler presented basic guidelines to the ACI community on the distance needed between two interactive dots to enable successful activation. He also put forward the next step of building a system, from these findings, which a dog’s nose could swipe to call the emergency services. Questions were asked of Zeagler about whether dog size influenced the gap requirement between the two dots. He stated that, similarly to my work, the screen was placed at dog head height, but he was sure this size would change depending on the dog’s size and ability to swipe (e.g. smaller dogs, smaller space between the two dots). Suggestions were also put forward of creating a database of animal factors, similar to human factors. Lastly, questions were asked about why Zeagler chose Fitts’ Law to measure within DCI and whether there are other measurements beyond Fitts’ Law; Zeagler spoke about physical form factors, which can lead to usability factors.

Sound to your Objects: A Novel Design Approach to Evaluate Orangutans’ Interest in Sound-based Stimuli

Patricia Pons, Marcus Carter and Javier Jaen

Pons gave a presentation about her exploration with orangutans, creating a ball that plays different programmable sounds on the ball’s movement, tracked using a Kinect depth sensor. She presented her initial findings about how this ball was used. Pons found that, with the Kinect placed against the bars of the cage as a sensor, the orangutan instead tried to destroy the unfamiliar object, and was often too far away from the Kinect sensor. Questions were asked of Pons about whether she could instrument the ball itself, so that the sound could be made directly by the object and the distance problems with the Kinect sensor would not be faced. Pons stated that whilst this would be the ideal scenario, due to welfare regulations it is not possible to introduce new technology into the enclosure, and her method was instead a way of further enriching the already available toys.

Exploring Methods for Interaction Design with Animals: a Case Study with Valli

Fiona French, Clara Mancini and Helen Sharp

French presented the work she had undertaken with elephant users at the zoo to develop guidelines for elephant ACI. French spoke about how she had made frames for the elephants to use, with each study giving more information about what the elephants can interact with and which devices are feasible. French was particularly interested in the aesthetics and looked at vibrating motors as a feedback interface for the elephant’s trunk touch. She then spoke about using different materials within the device, as they have different smells. French also investigated an automatic hose device and found that whilst initially elephants did not like this device (she presented a video of an elephant running away), once the elephant was wet they would not run from it. As such, this requirement formed part of the usability of elephant ACI systems. French was also interested in acoustic stimulation and planned to explore this further. Questions were asked of French about how she came up with the sounds for the acoustic device, and about why she used treats to get the elephants to initially use the device, as this encourages the elephant to use the device.

Towards Multispecies Interaction Environments: Extending Accessibility to Canine Users

Clara Mancini, Sha Li, Grainne O’Connor, Jose Valencia, Duncan Edwards and Helen McCain

Mancini presented her work on supporting assistance dogs and how to extend the accessibility of ACI systems to dog users. Mancini firstly spoke about the role that dogs play within society and how their jobs are often not made any easier by the systems that they use. She stated two main issues: firstly, that assistance dogs often work with controls designed for humans, which are inappropriate; and secondly, performance issues. Her talk then looked at creating buttons for an assistance dog to allow it to assist its owner more efficiently with a dog-designed system. Examples of tasks Mancini looked at were opening doors and turning on lights; she created two buttons, one blue and one yellow, for these two tasks. This was undertaken through an ethnographic study of the controls that dogs have to operate. Mancini found that dogs did not like certain materials, such as ones they could not work comfortably with their snout, which could become confusing interfaces to use. She stated that she used colours that the dog could distinguish clearly, and explored whether sounds should be included within the device to give the user a sense of awareness of the design, both sensorily and cognitively. Through this, Mancini found that some devices were easier to engage with, and that mapping is not always something a dog can do within a system as a human can. She then split the usefulness of mapping for dogs into three things: symbols (which you can teach a dog), icons (recognised representations) and indexes (associative learning). Mancini then noted that if a dog is unable to perceive an element of a device, it cannot make sense of the device; through consistency, however, a dog is able to make sense of it. She suggested that the more an object behaved as an index, the more trustworthy the device became through affordance and feedback, which became important factors in ACI. Mancini stated that whilst dogs are unable to understand mapping and constraints, they are able to understand perceivability, consistency, affordance and feedback, these things being arguably closer to a dog’s own affordances. Mancini expressed that this is just her hypothesis of how these principles might be more relevant. Questions were asked of Mancini about co-evolving within this case study, where she stated that she does not know what the meaning might be for the dog, but indexical and symbolic learning does occur. The model she presented is a conceptual model, but it provides a way of reasoning towards affordance. In this way the principles presented within this study are more important for animals, and ACI researchers should be asking why certain principles occur.

Plant-Computer Interaction, Beauty and Dissemination
Fredrik Aspling, Jinyi Wang and Oskar Juhlin

Aspling gave a really interesting talk about ACI within the realm of plants and beauty. Aspling stated that plants are often seen as the mute and immobile part of the world passing by, merely there as objects; however, he argued for ACI systems for them. He suggested that plants have some sort of intelligence, but perhaps of a more alien kind: they have been seen to interact with the environment and solve different sorts of problems, such as sensing water and learning, all without a brain. He then suggested that we can also turn to biology to look at this phenomenon, where every subject is an object and every object is a subject, often through unaware interactions. An example of this interaction he gave was that humans are just like bees in being used by plants for dissemination. His work asks: if plants are users within ACI systems, what would they want? He put forward that this could be explored through an ethnography of ordinary human-plant interaction, of plants in computer systems, and by looking at plants in design and architecture. I personally really enjoyed this work, as it was on the fringes of what I would consider ACI. In this work he proposed to move more into the realm of looking at plants as input devices, biometric systems and nurturing systems, where he thinks he will assess plants’ natural ability as biosensors.

Becoming With: Towards the Inclusion of Animals as Participants in Design Processes
Michelle Westerlaken and Stefano Gualeni

Westerlaken gave her talk focusing on how to include animals within ACI as participants, drawing on theoretical notions from areas like posthumanism and critical animal studies. She centred this around Haraway’s concept of ‘becoming with’ an animal and how you can invite animals to become participants within the design process through exploring playful encounters. She spoke about how situated knowledges are generated from these studies and can transform and inspire designers to offer up alternatives to the ACI community. Westerlaken then presented some studies she had conducted with her own two dogs on textures on moving balls to open up the design space in ACI. Questions were asked of Westerlaken about whether she believes that participatory design applies to animals, to which she answered that she does not care so much about our human tendency to try and force-fit human-centred terminology and methods when designing with animals. She rather proposes to explore what design methods do work with animals and to develop frameworks and theoretical notions around those. This then opened into a discussion about recommendations that Westerlaken could give to the ACI community for the future, and how she could quantify good and bad. Westerlaken answered that she thinks design practice often arises out of intuition and creative efforts that are not easily turned into design principles within the broad field of ACI. While the community appreciated this exploratory effort, suggestions were made to Westerlaken to start quantifying this usage, with the point that our major issues and the essence of measuring welfare arise from these challenges.

Unleashed Enthusiasm: Ethical Reflections on Harms, Benefits, and Animal-Centered Aims of ACI
Katherine Grillaert and Samuel Camenzind

This talk, given by Grillaert, covered ethical reflections on the harms, benefits and animal-centred aims of ACI, with Grillaert reflecting on the ethical position of work undertaken within this field. Her talk focused mostly on the conflicts that ACI technology faces with the non-speciesist approach advocated on the ACI blog, which suggests that within ACI we should treat each animal as an individual according to his or her own needs. Within her talk she also mentioned that if technology is truly an ACI ‘thing’ then we should use it with animals for non-work purposes. She then spoke further about these conflicts through motivation tests, where an animal is tested to see how much work it would do (e.g. weight lifted) to get to an item. She questioned whether this was ethical, and whether these responses could be comparable between, say, a human and a cat. Grillaert also noted that there is a potential to increase fear through technology being introduced and then taken away. Lastly, Grillaert spoke about how, from her perspective, there are areas within ACI methodology that could be improved, such as the lack of data freely available in ACI. Questions were asked of Grillaert about whether she saw ACI contributing to new ways of assessing animal welfare, which she did, noting the potential.

 


Doctoral Consortium


Integrative Multimodal Play for Elephants

Fiona French

French looked at the gap in experience between captive and wild elephants to help build enriching interfaces for captive elephants. With the majority of her work looking at a single elephant named Valli, held in a Buddhist temple in Wales, French looked into the requirements that Valli needed but was missing within her captive environment. Whilst she initially thought of a calling game to enable elephants to speak remotely (there were jokes from the audience about an ‘elephone’!), French decided to move away from this initial idea into exploring toys. She has begun to explore using different textiles and is looking into exploring acoustics and haptics for the elephant. Questions asked of French were around who she was designing for, with the community feeling that it is OK to design for one elephant and then re-purpose things in another context for another user. Another question was about the terminology ‘showing interest’ and what this means within ACI systems. French answered that this is currently assessed through both zookeepers and video footage analysis, with the suggestion of measuring time when classifying these behaviours to give more quantitative data. Lastly, a behaviourist asked why French was trying to mimic a biological need, as it can never be met or mimicked since she is not an elephant; the behaviourist suggested that play should instead be used as a primary reinforcer.

 Designing for the Other: Views on Selves, Playfulness, Multispecies Philosophy

Michelle Westerlaken

Westerlaken presented her PhD work focusing on her ideological approach to ACI, calling for a more critical approach to designing for and with animals. Her research focuses on the practice of “doing multispecies philosophy” through the design of playful artefacts with animals, with the aim of rethinking anthropocentric practices that oppress and exploit animals. Suggestions were given by her mentor that she first formulate a theory about ACI and then find a methodological approach by starting with technology. This would allow an improved, developed methodology for articulating the interaction. The audience also suggested using dogs, as there has been a strong history of exploration with them and this would not conflict with Westerlaken’s ethical values. Dogs are also inherently cooperative, with dogs and humans often already exhibiting a level of play which Westerlaken can build from.

Enabling Embodied Interactions in Playful Environments

Patricia Pons

Pons’ PhD work has so far centred around modelling animals within computer systems for automatic recognition, mostly focused on cats but also tested on orangutans and big cats. Pons aims to have a fully working system that can recognise body language. Queries were put to Pons about fears the community had that the body positions being recognised would be individualistic. Pons felt that the recognition of body language context could be done by gathering information from the animal device itself.

Biometric Interfaces for Dogs

Ceara Byrne

Byrne’s PhD is interested in looking at sensors to see which are most effective for dogs. She sees her work as an interdisciplinary approach, pushing knowledge held within the field back into other fields to create a more combined, shared foundation. Questions asked of Ceara were about the huge scope of her work, and how perhaps scaling it down would help her further define her goals.

 Designing for Wearability in Animal Biotelemetry

Patrizia Paci

Paci looks at developing frameworks and measurements to help develop animal biotelemetry tools for animal monitoring and welfare. She is interested in the wearability of these devices, as they can bias normal behaviour when unsuitable. Paci is also interested in requirement gathering within a wearer-centred framework. Questions were asked from the audience about what she was building the framework up from. Paci is building up the analysis from guidelines that began in animal welfare, iterating from this into practice to refine the framework; in this way the framework informs the prototype.

Creating Meaningful Interactions with Dogs and Screens

Ilyena Hirskyj-Douglas || Download Presentation

Hirskyj-Douglas presented her current PhD work, covering various studies and theories about a dog’s interaction with screen devices and how best to support these practices. She spoke about her research goals and current studies, from the conception of looking at what media a dog enjoys watching to tracking a dog’s vision across multiple screens. Questions were asked of Hirskyj-Douglas about how she measured a dog’s gaze, which she answered was through video analysis and researchers’ judgement. The community also spoke about how, within dogs’ vision, there is such variability among breeds that this might have an impact upon the findings, which Hirskyj-Douglas acknowledged, speaking about how breed pairing in studies can prevent this variability. An audience member also asked Hirskyj-Douglas whether she monitors the owners, to which she answered that she tells the owner not to watch the screen, to ensure that the dog is not following the human’s gaze, and that the human is told not to encourage the watching behaviour. The audience felt that Hirskyj-Douglas’ work had great potential for enrichment projects in the future. A comment was also made about Hirskyj-Douglas’ work being more ethnographic than participatory, as it seeks to understand dogs within an ordinary environment. From this discussion a conversation ensued about the definitions of training and what training is within ACI systems.


Extended Abstracts


 Training Collar-sensed Gestures for Canine Communication

Joelle Alcaidinho, Giancarlo Valentin, Gregory Abowd and Melody Jackson.

Alcaidinho presented her work on how to train dogs for gestures, showing initially that dogs are able to follow gestures on cue, such as pulling on a tug system. This work was centred around research on a vest which dogs are able to use to alert people, such as in the search and rescue of a lost child. A system like this could be activated by a dog to let the rescuer know the person’s location whilst enabling the dog to continually follow the person. So far this system’s design has put forward the idea that different dogs require different activation systems, depending on the dog, owner and environment: one activation was under the dog’s chin, attached to the chest, whilst another was on the side. Another requirement that came from this study was the importance of working with the dogs and people involved in the system to help build up requirement sets. Lastly, the device also had to give feedback to the dog that it had been activated, to prevent the dog from continually trying to activate the device.

Semi-Supervised Classification of Static Canine Postures Using the Microsoft Kinect

Sean Mealin, Ignacio X. Dominguez and David L. Roberts

Mealin presented his work, which bore resemblance to Patricia Pons’ work, on using the Microsoft Kinect as a depth sensor. In this, he was looking at creating a device that recognises a dog’s different postures, allowing for posture recognition without an emphasis on wearable technology. He presented the work he had done so far with a few dogs: through very small classifications, his system was able to recognise if a dog was sitting down. Questions were asked of Mealin about whether there are certain dogs whose body position is easier to identify, and he spoke about a dog with a curly tail held tight to its back being harder to track, as it is not a typical dog body. He then spoke about how the depth images tell what position the dog is in by using depth data to provide labels.

Designing for Wearability in Animal Biotelemetry

Patrizia Paci, Clara Mancini and Blaine Price

Paci described in her talk the work she has undertaken on the usability of collars for cats. She created an ethogram of behaviours to map the wearability of the technology through a model created iteratively as her work progresses. Questions were asked about whether this work followed Grounded Theory (GTM) principles, but the community felt this was instead more of an ethogram approach.

Exploring Human Perceptions of Dog-Tablet Playful Interactions

Anna Zamansky, Sofya Baskin and Vitaliya Kononova

Zamansky presented her work on human perceptions of dog-tablet interactions, in which humans judged dog interactions as good or bad based upon videos of dog behaviour.

A Dog Using Skype

Alexandre Pongracz Rossi and Sarah Michelle Rodriguez

Rossi presented his system in which he had trained his dog to react to commands given over Skype. If the dog followed the commands, a 3D-printed device that he had made would dispense treats to the dog. This work was controversial, to myself at least, as I felt at times that this was the circus-act side of ACI, where the only benefit of the system was the human’s entertainment.

 Dog-Drone Interaction: Towards an ACI Perspective

Anna Zamansky

Zamansky presented her work investigating dog-drone interaction and questioned whether there is a safe way to create dog-drone interaction, as this can often be precarious. Examples were given of how dogs use drone devices, with videos shown of dogs actively destroying the drones used.

Aquarium Fish Talks about Its Mind for Breeding Support.

Naohiro Isokawa, Yuuki Nishiyama, Tadashi Okoshi, Jin Nakazawa, Kazunori Takashio and Hideyuki Tokuda. 

Isokawa presented his work on tracking fish within a tank to give the user a message about how the fish is doing. This would give messages such as ‘I am hungry’ if the system had not seen the fish making eating movements, and gave the user a better way to interact with their fish.

De-computing the Pigeon Sensorium
John Fass and Kevin Walker

Fass presented the work that he and his students had undertaken around experimental and exploratory design within ACI. He mentioned that this work fits into the third-paradigm position, with design not focusing on the medium but instead transforming data into design. The work presented different wearables to give the user the visual perception of a pigeon. This came from the notion that we might be able to draw more on the senses of animals, beyond just putting tablets in front of them.

 

 



Keynote


 

Computer use by non-humans to improve their welfare?

David Broom

The closing keynote was given by Broom on ACI to improve welfare. In his talk Broom spoke about welfare, the history of computer welfare research and environmental controls. He stated that if animals can predict what is going to happen then they are going to be better at choosing what happens, as natural selection would favour those that can predict. An example was given where rats were put in situations in which they could not prevent themselves getting an electric shock and did not know when this was going to happen; they would get sick, and as such unpredictability in ACI can be a problem. He then spoke about measuring the strength of preference and how computers can be used in collecting information which, in the long run, can be used for animal welfare. He stated that in animal welfare we often use sentience, where we tend to split animals into those which we care about and those which we do not; we divide animals into those which have feelings and those which do not. In this way it affects our moral discussion over what is sentient and what is not – which ones are? What is sentience? If a person is brain damaged, when does sentience cease to exist? Are embryos sentient? There is clearly a time when humans are not sentient, and as such do animals move through these same parallels? He then led into questions about how to measure an animal’s learning, in which how you ask the question of an animal often alters what answer you get, and as such you need to be careful about the questions you ask. In this manner you can never be sure if things happen because of the animal or the experimenter. The treatability of animals within ACI is important to welfare; however, as a group, animals behave differently. Whilst these are interesting questions, Broom stated that we often do not know most of these answers, such as: what is enrichment? Does enrichment transfer across animals? He suggested this can be partially answered by looking at what animals would work for; how much would they do to get X? Broom stated that some people are worried about the level of control an animal has and what the overall impact of this control is, drawing back to predictability being an important factor in ACI.

Broom was asked how welfare should be focused around biological problems. He replied that it should be biologically focused, with the adaptability of the animal and its biological function being important. This does not mean the animal has to have a natural environment: you do not need an identical environment to nature, but should instead focus on giving the animal what it needs. For most animals that live in the wild we have a lot of information, so we can use that information when keeping the animal in captivity. This is often harder in zoos, where in many cases we do not know enough, and some of these problems are not solvable. Here, knowing about biological function is important, but mimicking the natural environment is not.

Broom was also asked where the greater agency in ACI lies, and whether we as a community have been skirting around this. Broom stated that if we are trying to improve animal welfare through ACI, the best animals to work with are those with the biggest problems, present in the largest numbers, such as chickens. Most current ACI work has been done with pets and zoo animals, but perhaps there should be more of a focus on farm animals.

Lastly, Broom was asked about the future of zoos, to which he said that consumers often control the outcome of zoos through their opinion of them, which has recently changed with a more general awareness of animal welfare.


Overall, ACI2016 was a lovely experience, pulling together a multitude of researchers from across the world, from computing, engineering and design to animal behaviour, to create a multidisciplinary conference. Whilst this conference had a more animal-behaviourist feel to it than the previous two years, this showed both the development of the community and the increased awareness of a need to merge the ACI field with disciplines beyond computing and engineering.


ACI at Measuring Behaviour 2016

On the 26th of May 2016, in Dublin, Measuring Behavior hosted a workshop on Animal-Computer Interaction (ACI) (previously blogged). The workshop was organised by myself, Anton Nijholt, Patricia Pons and Adrian D. Cheok.


The aims of this symposium were twofold. Firstly, we wanted to introduce the topic of animal-computer interaction to the Measuring Behavior community, with the assumption that there can be fruitful interaction between these two communities. Secondly, we aimed for contributions that address methodological questions: How do you conduct user-centred design in animal-computer interaction, or in computer-mediated human-animal interaction? Which methodologies from HCI can be adapted to ACI? Clearly, in this emerging field of research, case studies can help to answer these questions, and they were welcomed as well.

In this blog post, I plan to give you a taste of the work presented, as well as the questions raised during this workshop.

Animal-Computer Interaction: Animal-Centred, Participatory, and Playful Design.
This workshop started with a talk by Anton Nijholt, welcoming everyone to the conference and introducing everyone to ACI, Dublin and Measuring Behavior. Within his talk he spoke about Clara Mancini’s definition of animal-computer interaction. He then went on to speak about previous workshops and events, such as ACI@BHCI and the first and second symposia, and introduced this workshop as another step for ACI.


Detecting Animals’ Body Postures Using Depth-Based Tracking Systems.
This next presentation was on Patricia Pons, J. Jaen and A. Catala’s paper on depth-based tracking systems using the Xbox Kinect, presented by Alejandro Catala. He explained the motivation behind their work and spoke about the growing interest in mapping behavioural patterns. Through this mapping, he stated, there are a lot of possibilities that a system like theirs could bring; examples given were the computer’s ability to automatically recognise behaviour and to create an interactive, playful environment. Their work was primarily interested in posture recognition, for which they currently had a 90% recognition rate. They hope this work can be used in kennels and rescue centres to help learn about animal behaviour, and they have tried their system with lions and orangutans in zoos. However, they felt that for orangutans this would not be the best approach and human methods are preferred. They ended by talking about current collaborations with specialists on posture behaviours.
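
As a rough illustration of the kind of geometric cue a depth-based posture recogniser can exploit (my own sketch, not the authors’ system; all thresholds and the toy frame are assumptions), the animal can be segmented as the pixels rising above the floor plane, and the blob’s proportions used to separate coarse postures:

```python
import numpy as np

# Illustrative sketch only, not the authors' system: a depth-based
# tracker can separate coarse postures (e.g. standing vs. lying) by
# segmenting the animal as the pixels rising above the floor plane
# and checking the blob's proportions. All thresholds are guesses.

def posture_from_depth(depth_mm: np.ndarray, floor_mm: float = 2000.0) -> str:
    """Classify one depth frame by the animal blob's height/width ratio."""
    mask = depth_mm < floor_mm - 100        # pixels noticeably above the floor
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return "no animal in view"
    height = ys.max() - ys.min() + 1        # blob extent in pixels
    width = xs.max() - xs.min() + 1
    # A tall, narrow blob suggests standing; a low, wide blob suggests lying.
    return "standing" if height / width > 0.6 else "lying"

# Toy frame: a wide, shallow region above the floor, i.e. a lying animal.
frame = np.full((240, 320), 2000.0)
frame[150:180, 60:260] = 1500.0
print(posture_from_depth(frame))  # -> lying
```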

Questions asked:

Q: Would you not need perspective inverse vision to allow for correlated vision?

A: It is not needed when using the tracking area.

Q: Did you consider using other cues, such as colours, as an additional set of features to distinguish between animals?

A: We are proposing this under a controlled scenario. It can be hard to maintain colours, especially when working with other animals.

Q: A cat is rather big considering the size of your tracking area (270 x 250cm). Do you face issues with the size of your tracking area, and could you use this with other animals, such as mice and insects?

A: This could be possible, but you would need to define new features. (Fiona French then stated that you could mount the Kinect on a drone to allow continued tracking and not be limited by space – loved this idea!)

Find out more about Patricia Pons’ work on her website and her publications on ResearchGate.


Animal-Human Digital Interface: Can Animals Collaborate with Artificial Presences?
Prof Naohisa Ohta then presented the paper he wrote with Hiroki Nishino, Akihiko Takashima and Adrian David Cheok on animals using visual computer machinery. Prof Ohta is new to the field of ACI, having previously been involved in computer vision. He started his talk by speaking about the many players in animal-human interaction and how we can share the same world even if we experience it differently. This opened up a discussion about the limited spaces between humans and animals, with both experiencing this space differently. He said that he and his students have been running private experiments for a while in which they Skype their dogs. Often the dogs would recognise their owner’s voice on Skype and look around for their owner; his own dog would look out the window. It is in creating interactions between dogs and technology that his long-term goals lie. He stated he did not want to make technology like Pawculus Rift (below) but instead to focus on a visual interface screen. He talked about how people visually perceive things: whilst 2D sometimes works well, he wanted to make a device with dynamic motion cues for dogs. The challenge he currently faces is that, whilst tracking technology is available for humans, it is not yet fully functional for dogs.
Pawculus Rift, a Virtual Reality April Fools

I personally really enjoyed this talk, and as someone who is involved in dogs, tracking and media technology, I can see the value of such a product not only for home-alone dogs but also for kennelled ones. After the conference I talked with Prof Ohta about his ideas on using this system as a ‘window’, to not only see other things but also smell them, creating a truly interactive product – maybe even with dog-dog interaction. I look forward to his future publications!

Questions asked:

Q: Would you think of adding Haptics into this?

A: No, he had not planned to do that.

A video of an example of the VR system Prof Ohta was looking to implement can be found here.


Towards a Wearer-Centred Framework for Animal Biotelemetry.
Patrizia Paci presented the paper she wrote with Clara Mancini and B.A. Price on the framework she developed for animal biotelemetry devices, such as activity trackers. She started the talk with the rise of such devices and the implications they can have for the health of the animal. The framework for designing these devices focuses on two things: features and methods. As an example, she mentioned animals wearing brightly coloured collars that cause them to stand out; at this point I was reminded of the animals in the film Happy Feet who, when tagged, believed it to be something special. She then gave a nice example of how wearables need to be centred around the animal, showing the different frequencies a cat and a mouse can both hear; a device would need to avoid both so that the cat can behave normally and hunt mice. She then went on to talk about a study she conducted with cats wearing different devices whilst she measured their behaviour. The three conditions were a control (nothing), an activity monitor (10g) and a GPS tracker (40g). She videoed the cats and noticed that a cat would often try to get the heavier collar off, and found a correlation between the method of attachment and disturbed behaviours. She finally presented proposed design recommendations within a wearer-centred framework.

Questions asked:

Q: What principles do we need for user-centred design and a good wearable experience?

A: A good experience is having no experience at all! The principles presented here relate not only to the wearer but also to other animals, in relation to the animal’s biology. For each animal, their biology would need to be considered and the design centred around that particular animal. Whilst this is currently only a work in progress, the next thing we are going to explore is the use of the device.

Q: Would you instead use biotelemetric devices under the skin?

A: No, because this is not good for animal welfare.

Q: What if the animal wore the device for a long time, so that it became used to the weight and features of the device?

A: She had not considered this, as this would be conditioning towards the device, going against animal-centred design and hindering normal behaviour.


Playful UX for Elephants.
Fiona French gave a lovely presentation on the paper she wrote with Clara Mancini and Helen Sharp on playful user experiences for elephants. The aim of her work was to enhance the lives of captive elephants by creating an enriched environment. However, as she pointed out, there is tension between the technology and the animal, as technology is not part of the animal’s ordinary environment. She saw, though, that technology could give the elephant back some control over its own environment. The question she faced was what the elephant had to play with, and how we add to their play space. She spoke about the different systems she had tried, from musical tubes to tactile pads and buttons. The keeper of the elephant she was working with wanted a shower system, so she experimented with the interface. She wanted the device to add value for the animal, and made a footpad (similar to a sewing machine pedal) to allow the animal to operate it. Whilst this work is still in progress, it took a very animal-centred approach, involving not only the animal but also the designer (herself) and the zoo staff. She is building the device using an Arduino, and I look forward to seeing how the elephant uses it and what she learns from this.

Questions asked:

Q: Have you thought about using a branch as an interactive device?

A: Elephants tend to be quite destructive, so using a branch would be difficult, expensive and potentially dangerous.

Q: Could you not train the elephant to use the device first, instead of discovering things that are intrinsically interesting to an elephant? Would this not allow for more opportunities?

A: Potentially, food could always be the motivator for this discovery. However, it removes the intent of the animal and modifies the behaviour we are trying to measure.

Interactive Toys for Elephants Website: http://toys4elephants.blogspot.co.uk/ 

Find out more about Fiona French’s publications on her university site and on ResearchGate.


Animal-Computer Interaction (ACI): A survey and some suggestions.
Egon L. van den Broek presented his paper on guidelines for ACI from an outsider’s perspective, being mainly involved in Artificial Intelligence (AI). He had some experience of human-animal interaction through a PhD student of his who used horses to help regulate disabled children’s emotions. He saw HCI as providing a fragile foundation for ACI and suggested a closed-loop model. He spoke about the ease of use of a system and the different interaction styles, and stressed the need to understand the agents involved and to get different communities together. He then presented a video of monkey behaviour with a woollen and a wire ‘nursing mother’ to demonstrate some of the challenges faced in ACI: that behaviour can be temporary or indirect. His key suggestion was to take care of validity with a specific baseline. He was interested in empathy in ACI, where you can infer others’ feelings. The key message I took away from his talk was to employ an interdisciplinary approach to ACI: ‘the challenge is as big as the need for animal-computer interaction’. He finally presented a new journal, Animal Sentience (ASent), on non-human animal emotions.


The Ethics of How to Work with Dogs in Animal Computer Interaction.
I presented the paper I wrote with Janet Read on the ethics of working with dogs in ACI. I started off by talking about how, as our knowledge of animals has grown and technology has evolved, the boundaries between humans, animals and machines have blurred, with the relationships between them changing. This has led to a change in how humans work with animals, with the old cost-vs-harm ethics being replaced by an animal-centred approach, though different approaches are still used along this linear scale: human vs. animal. I then spoke about the two ethical papers in ACI, Clara Mancini’s and Heli Väätäjä’s, and presented my own ethical guidelines on working with dogs in animal-computer interaction, which I call Dog Computer Interaction (DCI). I spoke about how I came up with these guidelines and that I expect them to grow as the DCI field, my own work and I myself do. Lastly, I spoke about how the research we do in ACI is influenced by our feelings about what animals are, and how this directly influences not only the study design but also the results, methods and theories created. I ended by asking other ACI researchers to state within their work how they feel about the animals they work with, to create a more in-depth approach.

Questions asked:

Q: Why not do a double-blind study with animals, to stop the feelings of the owner being involved?

A: Even if the owner did not know about the study, the behaviour of the researcher who designed the study would still be influential.

Q: I found this point interesting, as it goes against normal science, where you are supposed to take your emotions out of the equation. Is it personal feelings you want, or a statement about what you are doing? They are two different things.

A: Ideally I think both, as they are interwoven with each other.

Q: Yes, but you could just say you’re following those guidelines, and it doesn’t mean anything.

A: I agree, but by putting in a statement about how you felt about the animal, you would state how you treated the animal within the study and give clues to the mind-frame in which the study was designed. For example, someone doing work on dogs who loved dogs vs. someone who did not like them could possibly produce different results because of the way they designed their work.

The paper is available on ResearchGate and my presentation is available here.


Final thoughts…
Overall, the day was enjoyable, and it was interesting to see the different opinions, from the standard measurement of behaviour in science to the view that most researchers hold in ACI. To me, the key difference between these two fields is the definition of animal welfare. Whilst in animal science (bioscience) animal welfare is considered, ACI often goes beyond simply providing the animal with these needs, shifting entirely to what is an often-seen phrase in ACI publications: animal-centred. I agree with Fiona French’s answer that in order to measure true behaviour we should not train that behaviour, and indeed this was in my publication. This user-centred approach, whilst not new in HCI, is rather new in animal behaviour, and as interest in this interdisciplinary field grows (as it should, and I am sure it will) it will be interesting to see the different theories and methods taken, with residue from previous fields’ norms.

As an organiser of this event, I would finally like to take a moment to thank all of those who presented, asked questions and attended this workshop at Measuring Behaviour, and lastly Anton Nijholt for starting this wonderful event.

Finding out about Horse-Computer Interaction

In March earlier this year, Steve North published an article on horse-computer interaction, ‘Do Androids dream of electric steeds?: the allure of horse-computer interaction’, as part of his work on HABIT: Horse Automated Behaviour Identification Tool. As someone new to the area of horse-computer interaction, I got in contact with Steve to find out more about the systems he plans to develop and the ethics and methods behind his work with horses in ACI.

Dr Steve North (photo courtesy of The University of Nottingham)

Hi Steve, lovely to speak to you again! As someone new to Horse-ACI, I saw that within your article it was written that ‘We usually require horses to interact with us through our technology’. Could you please explain how horses currently interact with technology? Where does this usually happen?

When we (human-animals) first started using horses for transport and for traction (to pull and push objects beyond our own strength and endurance), it was necessary to control their movement and behaviours. Horses weigh around 500KG and are many times stronger than us. Without the advantage of tool-use, early humans would have struggled to get horses to do anything that they chose not to. At this point, humans had only ‘hard’ technologies available, such as the harness, the cart and the plough. This was (largely) a one-way conversation. Horses were not asked to enter into these interactions of their own free will. Like dogs, horses are reliant on a social structure, with similarities to that of humans. Whether through choice (grazing nearer and nearer to a human encampment) or via a process of capture and domestication, horses (at the species level) had identified the advantages of partnership (reduced predation, fodder in the winter etc.). For the individual horse, this still may not have worked out very well… Maltreatment, overwork and early death would sometimes have been his or her ‘reward’ for putting the best interests of the species, before their own. However, once that pact was made, the individual horse had surrendered any notion of consent, when required to interact with our ‘hard’ technologies. This technological control of horses continues into the present day, wherever horses are used for our entertainment or to labour on our behalf.

Thank you for that lovely explanation; I really like the link you’ve made between domesticated working animals and their evolving relationship with us. There seem to be two traps that people can fall into with domesticated animals: tool use and an anthropomorphic approach. You touched on this within your article: ‘Our current anthropocentric bias denies the reality that human animals are just one species in the family of animals’. How do you think it is possible to reduce this anthropocentric approach? How do you see your technology aiding this and avoiding the pitfalls of human empathy in image recognition?

Often, it seems that humans are only able to empathise with animal suffering when they recognise familiar behaviours indicating distress, pain and anxiety. If we can agree what constitutes an animal’s (or human’s) normal repertoire of behaviours, then the use of automatic behaviour recognition helps to provide an evidence-based approach. Rather than subjective assessments of animal welfare, we can move towards recognised benchmarks. Cruelty will be harder to justify when we remove the ‘wriggle room’ of humans being able to say things like: “There’s nothing wrong with him, he’s fine!”, or “…that species has a different nervous system to us, it’s not feeling pain in the same way that we do…”. Of course, using current technology to analyse animal behaviour is not the end position. This is just an incremental step. Hopefully, it’s a step heading in the right direction (!).

Poster on HABIT

Automated recognition of behaviour would indeed be a step away from human bias. I believe Dawkins’ (2004) paper suggested accompanying this behaviour with the situations that animals want to get to or away from (correlation), such as using this recognition tool to help analyse their likes and dislikes. She does warn about choice analysis, though, calling it ‘artificial’. This method helps, I believe, to put the animal at the centre of the design by allowing their choices to be the deciding factor in enriching the animal. You mention exploitation and enrichment within your work; what, in your view, is the difference between exploitation and enrichment? Who decides the border between the two? The more work I do in ACI, the more this question has been highlighted to me: there appears to be a tension between gathering information for the study vs. the animal’s welfare.

Conceptualised vision of HABIT

I agree with you that such a tension exists. But it only exists for us, not for the animal. The animal knows whether environmental changes are appetitive/rewarding, or noxious/aversive. We (as researchers) may be motivated to delude ourselves as to the benefit of our actions (!). Following on from my last answer, I think that there is a danger in allowing this border to be subjectively assessed by us as individuals. In our enthusiasm, we run the risk of developing ACI experiences for animals just because we think that they are interesting. I think that the case for enrichment needs to be made (and proved) on a case-by-case basis. When we intervene in the animal’s environment with a new artefact, is this to meet a demonstrable ‘need’ or a ‘want’ initiated by the animal? Is there ethological evidence to justify this intervention, and have we realistically assessed how our artefact will ‘fix’ things?

I think, though, that this tension also exists for the animals when, during a study, they are deciding whether to give us what we want through compliance or to do what they innately choose. I agree that it is interesting to discuss whether the product is really for the animal, or there because we (researchers) think it is interesting. In Sarah Ritvo’s work (2014), she found that despite making the technology for them, the bonobo apes did not want it, as I have also found with mine. It is this drive to filter down to the animal’s true preferences that has led to a biocentric approach to ACI. How did you come to apply the biocentric approach to ACI?

Dr Steve North with Millie the horse

On a personal level, it seems self-evident to me that ‘human exceptionalism’ is just plain WRONG. Human animals are just not categorically or essentially distinct from non-human animals. Increasingly, science is also starting to support this position. The erosion of the belief that consciousness is a purely human attribute is a big stride in this direction. In 2012 ‘The Cambridge Declaration on Consciousness’ stated that: “the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates” (Low et al., 2012).

I could not agree with you more; I have had papers rejected by conferences that told me animals do not feel. I believe it was Darwin who first said that ‘there is no fundamental difference between man and animals in their ability to feel pleasure and pain, happiness and misery’. As you know, my work is primarily around dogs; as someone relatively new to the area of horses, I wondered how ‘The horse was a primary driver of human technological development’?

Between 4,000 and 3,000 BCE, humans made a distinct transition from eating horses to exploiting their abilities. Humans had to be able to imagine, design and then implement the technological artefacts required to harness ‘horse power’. As horse transportation then enabled the spatial dispersal of humankind, people encountered new challenges that also required technological solutions, through the same cycle of imagine, design and implement.

I wonder if dogs are going, or have been, through this same cycle; in some ways I believe they are, with their unique abilities still being ‘harnessed’. In your work you mention physiological ways to track a horse’s stress levels, from heart rate to cortisol, which help humans understand when a horse is relaxed. I actually disagree that this is the way forward for monitoring animals, and suggest a more triangular system of behavioural interpretation through the owner, the expert and physiological signs, which does not rely on approximations that may be biased.

Figure taken from my accepted paper at BHCI’2016 on a triangulation system for measuring behaviour from behavioural scientists’ observations, owner observations and physiological signals.

For example, heart rate can change for varying emotive reasons, and cortisol can indicate stress or arousal. How do you see your system safeguarding against these? How do you prevent a situation like the one in Lawson et al.’s (2015) work, where the human stops listening to their animal and instead goes off the technology?

I would prefer stress monitoring to be non-invasive. Humans (at least anecdotally) suffer raised blood pressure as a result of ‘white coat syndrome’; how can we expect stress levels in animals not to be impacted when we start ‘doing things to them’? To some extent, this effect (heightened stress caused by ‘needle sticking’) may be offset through operant conditioning and habituation. Of course, this approach won’t work with animals unused to handling, or where habituation to humans is undesirable (say, in a captive breeding program). In equine research, there are methods such as measuring eye temperature (from a short distance, using infra-red thermography) that have been demonstrated to accurately reflect arousal. I would agree that the reasons for observed physiological changes are sometimes multifactorial. That is why I have a preference for behavioural monitoring. Once an ethogram of behaviours has been refined through trials across a broad sample from a species population, it is fairly safe to link a specific behaviour to fear, as opposed to (say) excited curiosity. It is also possible to combine physiological and behavioural analysis, aiding clarification. My HABIT research partner Carol Hall has combined eye temperature and cortisol measurements with an ethogram, indicating behaviours relating to ease of horse handling (Yarnell et al., 2015).

Comic about talking horses

I look forward to seeing such systems! Within your article you mention that ‘horses would be unable to provide feedback on its function, ease of use, and behavioural impact’. Contrary to this, I take the viewpoint that animals are constantly providing feedback through their use, in a similar way to how non-verbal users do through their body language. You touched on this yourself when you mentioned that ‘perhaps we need to observe and respond to non-verbal behaviors, as we currently do with spoken languages’. As in my doggy ladder of participation model, I believe that by empowering the animal through changing the feedback method, we can learn about their reactions and include them more in the design of the system, making it more animal-centric. In light of this viewpoint, I see it as humans building bad technology and misunderstanding the animal’s own responses that prevents this feedback loop.

Yes, I agree with you totally on this. Animals are expressing their opinions continually. I meant that horses cannot provide feedback using linguistically orientated, speech-based patterns of language. When it comes to providing feedback on an introduced artefact, they are unable to participate in a semi-structured interview (!). Potentially this is very scary to HCI folk…

Hah, I laugh, but with dogs there have been people who have claimed to create systems allowing dogs to ‘talk’, so maybe this might be done for horses! I think the whole idea of ACI can be scary to HCI, and I know of those who in fact do not see the link between HCI and ACI at all. As someone who has been involved in tracking animals’ behaviours before, I wondered how you plan on recognising horses within the video. Will this be live recognition or post-recording?

HABIT Horse Tracking Demo

Now, that’s a great question, Ilyena 🙂 Unfortunately, it’s also a question that is as long as a piece of string. In HABIT we have always said that selecting our precise technological approach would form the first work package, in a fully funded research project. As the tech person on the project (along with Clara Mancini from the Open University), I have consistently resisted the temptation to rush into pursuing one approach, without doing a THOROUGH review. Computer Vision is a field where often a great deal is claimed. When it comes to delivery, it is only when you really unpick the underlying algorithms and potential limitations of a solution (…often in the small print…hidden away in a publication) that you truly understand its capabilities. However, I have recently been breaking my own rule and started to play around with some code and demos. I’m currently investigating a machine learning-based approach, using template based detectors to recognise movements on video that are characteristic of specific behaviours. This would allow some semantic connection between an ethogram and templates representing each behaviour. This works on the basis of generalised movements within a video clip. Rather than tracking individual skeletal structure (as with the Kinect), this approach simplifies a behaviour into how it might appear if viewed rapidly out of the corner of your eye. Some neurologists believe that this is analogous to how the human visual system recognises activities, as summarised patterns of movement spotted by groups of ‘trained’ neurons. This all sounds great, but I don’t have anything at a stage that is ready for public consumption. I am still testing out different neural net solutions and thinking about how to classify the results. Too many things to do…not enough time.

In the short term, the HABIT system would be applied to pre-recorded video. I would love this to work in real-time but that may not happen in the early stages. Really ‘hard-core’ computer vision development work is rarely about watching things happen in front of you. It tends to be demanding in terms of computational cycles. It’s more a case of leaving it overnight and coming back to see how five minutes of video have been processed!

It can indeed be a ‘how long is a piece of string’ question! I found the more I delved into tracking systems, the longer the list of flaws and dependencies became. I found that in real time the amount of analysis that had to be done slowed the system down, and you get better results with pre-recorded video. Yes, it certainly is a case of leaving it overnight at times, especially if you have a big corpus of learning material!
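
To make the idea concrete, here is a minimal sketch (my own illustration, not HABIT code) of clip-level motion-template matching on pre-recorded video: each clip is summarised as a single accumulated ‘motion energy’ image and compared against stored per-behaviour templates. The file names, frame size and dot-product matcher are all assumptions.

```python
import cv2
import numpy as np

# My own illustration, not HABIT code: summarise each pre-recorded clip
# as one accumulated "motion energy" image, then label a new clip by its
# closest per-behaviour template. File names, the 64x64 frame size and
# the dot-product matcher are assumptions for the sketch.

def motion_energy(video_path, size=(64, 64)):
    """Accumulate absolute frame differences over a whole clip."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("could not read " + video_path)
    prev = cv2.resize(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), size)
    energy = np.zeros(size, dtype=np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), size)
        energy += cv2.absdiff(prev, gray).astype(np.float32)
        prev = gray
    cap.release()
    return energy.ravel() / (np.linalg.norm(energy) + 1e-9)  # unit vector

# Hypothetical templates: one motion-energy vector per ethogram behaviour.
templates = {"grazing": motion_energy("grazing_example.mp4"),
             "rolling": motion_energy("rolling_example.mp4")}

clip = motion_energy("unlabelled_clip.mp4")
best = max(templates, key=lambda b: float(np.dot(clip, templates[b])))
print("closest behaviour template:", best)
```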

Replica of a horse painting from a cave in Lascaux

You mention measuring whether a horse leads a “natural” life. What would you see as natural? I think this might change between wild and domesticated horses, between countries, and between types of horse (show horse, pet horse, working horse etc.), but I am not a horse expert! How do you plan on tackling problems such as these?

By ‘natural’, I mean within the repertoire of the horse’s behaviours, as described in a well-established ethogram (such as McDonnell, 2003). Yes, there are differences in behaviour according to level of domestication, environment etc. There are also differences at the individual level, which should not be ignored. McDonnell describes the range of behaviours in wild horse herds and then describes stereotypical (abnormal) behaviours and other behaviours uniquely displayed in a domesticated environment. We plan on starting with the basic ethogram (or even a small subset of it), but refinement or customisation for specific environments / herds / individuals is always possible.

It would be lovely to see how this range of behaviours translates into real-life automated monitoring. I am very interested in following this project as it comes along, as everything you are doing sounds wonderful: I admire your approach to ACI! I would like to thank you so much for taking the time to talk to me about your work, and I look forward to seeing future publications.

To find out more about the HABIT work, you can read the ACM Interactions article here, their position paper ‘Horse Automated Behaviour Identification Tool – A Position Paper’ here, or check out their website habithorse.co.uk.

References

Dawkins, M. S. (2004). Using behaviour to assess animal welfare. Animal Welfare, 13, S3–S8.

Lawson, S., Kirman, B., Linehan, C., Feltwell, T., & Hopkins, L. (2015, April). Problematising upstream technology through speculative design: The case of quantified cats and dogs. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 2663–2672). ACM.

Low, P., Panksepp, J., Reiss, D., Edelman, D., Van Swinderen, B., & Koch, C. (2012). The Cambridge Declaration on Consciousness. The Francis Crick Memorial Conference on Consciousness in Human and non-Human Animals, Churchill College, University of Cambridge, July 7, 2012.

McDonnell, S. M. (2003). The Equid Ethogram: A Practical Field Guide to Horse Behavior. Eclipse Press. http://books.google.co.uk/books?id=-Mvm9NjH0WUC

Ritvo, S. E., & Allison, R. S. (2014, November). Challenges related to nonhuman animal-computer interaction: Usability and ‘liking’. In Proceedings of the 2014 Workshops on Advances in Computer Entertainment Conference (p. 4). ACM.

Yarnell, K., Hall, C., Royle, C., & Walker, S. L. (2015). Domesticated horses differ in their behavioural and physiological responses to isolated and group housing. Physiology & Behavior, 143, 51–57. DOI: 10.1016/j.physbeh.2015.02.040

Interview with Petcube

While Animal-Computer Interaction is a relatively new field of study, following this growth of research, products have been developed to entertain our beloved pets. This increase in pet products has resulted in devices for dogs, aimed primarily at increasing their welfare by providing stimulation whilst they are home alone. As researchers in ACI, it is important that we are involved in, and take notice of, the industry side of animal technologies. While I have previously interviewed the CEOs of CleverPet, today I invite you to delve into another pet technology: Petcube, an interactive device for dogs and cats.

Petcube was a Kickstarter start-up that raised $3.8 million to build a cube-like device allowing dog and cat owners to remotely interact with their pet via an app, through video and voice communication and laser game interactions. The product was originally conceived when designer Alex Neskin’s pet chihuahua, Rocky, suffered from separation anxiety, leading the neighbours to complain about the noise caused by the dog’s distress. This led Alex to build prototypes of Petcube on an Arduino to entertain and check up on his dog. He then joined together with the other two developers of Petcube to iteratively create the overall product.

Petcube Device and App

The final product allows the human user to view the camera’s 180-degree range, talk to and hear their pet, and use their finger to guide a laser which the pet can chase. Whilst this product has been available since 2013, it has recently undergone a few software updates, adding sound and motion detection and an auto-play mode, which plays randomised laser patterns triggered at random. The recorded video can be viewed and stored remotely, with cloud storage a future addition. These videos can then be shared in an online Petcube community, which also allows remote playing with other people’s pets and viewing of other people’s videos and images.

The feature I really liked about Petcube was the deployment of these devices in shelters, to help increase adoption rates through exposure and to increase the entertainment of the sheltered pets.

As a designer of dog interfaces, I had a few questions for the company around the design and implementation of the product:

How is the product pet-focused? The company makes the product pet-focused through iterative design with the owner and pet. They claim to proactively reach out to owners, and as the system is primarily software-based, they roll out updates through these systematic reviews. This enables a feedback loop to be built. The methods they use to gather feedback from owners about Petcube’s use are surveys and research.

Petcube App viewing a dog

How did you test the product and come up with its design? As mentioned above, the product is tested through iterative design in what appears to be an agile methodology. Testing a product like this can be troublesome, as you effectively have two users: the dog and the owner. Whilst the owner is the one buying the product, the dog also has the choice of using the technology, which is ultimately why the product is bought: to aid the human-dog relationship. It would be interesting to know more about the method they chose, or built, to test the product with dogs, to see the validity of their results and how they correlate with current methods of designing with dogs. Aesthetically, the product is designed to look sleek and appeal to humans, and is provided with a tripod to enable a higher field of vision.

How does the advanced motion sensing work? How is this automatic? There have been various forms of tracking animals, such as facial recognition, body-posture analysis and eye tracking. With this new update, I was interested in how consumer products recognise animals. Petcube uses the webcam to pick up image changes as a basic form of motion detection, sending push notifications to the user. This method, however, would trigger a push notification for any movement, not specifically the pet’s, unlike the methods above.
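
As an illustration of what such webcam-based image-change detection might look like (my own sketch, not Petcube’s code; the camera index, blur kernel and both thresholds are assumptions):

```python
import cv2

# A minimal sketch of webcam frame-differencing motion detection, in the
# spirit of what is described above (my reconstruction, not Petcube's
# code). The camera index, blur kernel and both thresholds are guesses.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    delta = cv2.absdiff(prev, gray)                      # changed pixels
    moving = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(moving) > 5000:                  # enough change
        print("motion detected - would send a push notification")
    prev = gray
cap.release()
```

As the paragraph above notes, nothing here distinguishes a pet from a person walking past: any sufficiently large pixel change fires the alert.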

Petcube Device showing laser and camera

How does this enrich animals’ lives? The product claims to enrich animals’ lives by providing dogs and cats with basic entertainment (the laser) and allowing them to communicate with their owners and vice versa.

How do you ensure no negative effects, or do you provide guidelines for the owners? When working with animals there is always the need to ensure the animal’s welfare, especially with a remote system. To help owners use the system with their pets, Petcube provides a user guide that carries health warnings about prolonged use of a laser system, but does not give guidance on the system’s use. Petcube instead puts the onus on the dog owner: as the system is highly customisable, the responsibility for its use is down to the owner.

What future updates are you thinking of implementing for this system? Petcube is currently looking into further sensory and haptic devices to integrate into the current system, such as an interactive ball (think walky-talky ball) which moves of its own accord.

THOUGHTS ABOUT PETCUBE

This conversation with Petcube led me to think about the current systems in place in ACI, such as Playing with Pigs, which has a similar concept to Petcube but with pigs. They used an interactive wall, but once again with remote interaction through devices (tablets/phones). Their work, however, was primarily based on tactile interaction and, unlike Petcube, the animal had higher automatic interactivity within the system, with visual feedback (when the ball of light is touched on the wall, it sparks). In this way, I believe that Petcube can be improved by involving the animal as a more prominent user in the system. This can be done by increasing its interactivity so that even when the owner is busy, the animal can still use the system. This is where the new auto-play update slots in; however, it is currently unable to recognise the animal’s features to enable more advanced game play.

Playing with Pigs Interactive Wall – www.playingwithpigs.nl/

Whilst their system does not have touch interfaces (although these could be added through future implementations, such as the external devices mentioned), dogs and cats primarily use body language as a form of communication. This is why I became so excited at the prospect of automated detection being used in a commercial system, as this allows for further automation but also autonomous play. However, I was disappointed to find this was only generic movement detection, which could not only cause false positives but also cannot distinguish an animal from a human, or recover the meaning behind the animal’s movements. The current system, with its high-specification camera, could use basic image-recognition methods, like those I have built, to see where the dog/cat is looking and respond with the laser automatically. This would allow the pet’s communication with the system (the laser) to be understood, increasing interactivity and creating a feedback loop from technology to pet. This in turn would allow the system to be used more, bringing more satisfaction to the owner through the enjoyment of the pet, satisfying both user groups.
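
As a rough sketch of how a consumer system might filter generic motion down to pet-specific events (my own illustration, not Petcube’s implementation), OpenCV ships a pretrained cat-face Haar cascade that could gate the motion alerts above; dogs would need a separately trained detector:

```python
import cv2

# Hypothetical refinement, not Petcube's implementation: only raise an
# alert when motion coincides with a detected cat face, using the
# cat-face Haar cascade that ships with OpenCV. Dogs would need a
# separately trained detector.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalcatface.xml"
cat_detector = cv2.CascadeClassifier(cascade_path)

def pet_in_frame(frame_bgr) -> bool:
    """True if a cat face is visible in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cat_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```

Gating the earlier frame-differencing check with a detector like this would cut notifications fired by people walking past, at the cost of missing the pet whenever its face is turned away from the camera.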

I also question the cognitive benefits of cats and dogs playing with lasers; whilst this may provide stimulation, I am unsure it would solve separation anxiety beyond temporarily staving off boredom. Hopefully, by creating a more interactive system through their updates, Petcube could provide the dog or cat with further cognitive challenges rather than relying on a chase instinct.

Dog chasing a laser with CleverPet

Lastly, I am faced with the same quandary that Patricia and I felt with CleverPet: where does the responsibility lie in the use of animal-computer systems? Whilst I think owners are responsible for how they use this system, I believe there should be stronger guidelines warning about the continual use of such a system in place of real human-animal interaction, and about the confusion that can occur when dogs hear their owner’s voice remotely, especially with anxious pets who are suffering cognitively.

Overall, whilst I enjoy learning about these products and how they are used, I believe that the ACI community needs to work more closely with industry to help create better publicly available technology that benefits the animal as much as the human.