Built to Play Interview!

Hopkins Duffield were recently interviewed about our project Laser Equipped Annihilation Protocol by Daniel Rosen and Arman Aghbali from the online magazine and radio show Built To Play. Built to Play is a show about games, tech culture, and interactive arts, and an online magazine that collects insight and analysis on video game history, art, and the ways we play, with in-depth looks at games new, old, and not-yet-released. Built To Play broadcasts on Scope 1280 AM radio in Toronto on Mondays at 1 p.m. EST.

We were featured alongside game designers Kieran Nolan, Sagan Yee, Alicia Contestabile, and Nadine Lessio in Built To Play’s feature on the artists involved in the Dames Making Games Killer Interface Jam, and Vector Games Art and New Media Festival 2015.

Check out the podcast Built to Play 55: Interface the Machine for the full interview!

Laser Equipped Annihilation Protocol Logo


InnerSpace Television Broadcast on Vector Festival & Post Vector Wrap Up!

Vector and the Dames Making Games Feb Fatale fundraiser, Killer Interfaces, at Bento Miso were a blast! We would like to thank everyone who came out to support the Dames initiative to send female game developers to GDC 2015, and all those who masochistically attempted to beat Laser Equipped Annihilation Protocol! Our high score went to Alicia Marie, who after much determination managed to beat the game in 5.68 seconds! You can see her run here.

Additionally, we were mentioned in InnerSpace’s interview with Martin Zeilinger for the Vector Game Art & New Media Festival 2015! The interview aired across the country on the Space channel on February 20, 2015 at 6 PM, with repeat airings on MTV on Monday, February 23, 2015 at 6 AM, 9 AM, 12 PM, and 3 PM!

More documentation of the project is coming soon, and we will be blogging about our process of creating Laser Equipped Annihilation Protocol.

Documentation of The LEAP Engine!

Just in time for the Vector Festival of Games and New Media Art / Dames Making Games “Killer Interfaces! Party + Fundraiser” on Feb 20th at Bento Miso, which The L.E.A.P. Engine will be a part of – see the Killer Interfaces Facebook Event Page! Here is some documentation of The L.E.A.P. Engine from Hamilton Supercrawl 2014, at The Factory Media Centre. We have tonnes of work-in-progress documentation, and we will be blogging about our process of creating L.E.A.P. within the Site 3 CoLaboratory Artist Residency Program in the near future!

Laser Equipped Annihilation Protocol (The L.E.A.P. Engine) is a live-action gaming environment that explores the personality of a snarky and mysterious game sentience who has infected a room with technological systems that challenge players and collect data. With a limited amount of time, the player must pass through a complicated series of changing and alternating laser patterns without tripping any of the lasers in order to deactivate the system and win the game. If the player trips a laser or if the timer runs out, it’s game over.

Also, we would like to toss a major thank you out to the following people for making this project possible in the first place: Site 3 CoLaboratory, Christopher Thomas (Technical Consultant), The Ontario Arts Council, Interaccess Electronic Media Arts Centre, Dann Hines, Active Surplus, Creatron, Seth Hardy, Kate Murphy, Marc Reeve-Newson, Alex Leitch, Jason McDonald, Michael Awad, Terry Anastasiadis, John Duffield, Christine Duffield, Mike Duffield, Barbara Hopkins, John Scarpino.

 Ontario Arts Council Logo

LEAP Engine @ Killer Interfaces! Party + Fundraiser

Vector Festival of Games and New Media Art is teaming up with Dames Making Games (of which Daniele is a proud member) to present the Killer Interfaces Party! We’re excited to announce that The L.E.A.P. Engine, our live-action laser game, is going to be at Bento Miso on February 20th for this event!

Hopkins Duffield L.E.A.P. Engine logo

Friday, February 20, 2015, 9:00pm – Saturday, February 21, 2015, 2:00am
@ Bento Miso Collaborative Workspace, 862 Richmond Street West

Facebook Event Page

This party, which is also a part of Vector Festival of Games and New Media Art, is set to show off some of the fun and innovative games created at Dames Making Games’ Feb Fatale 3: KILLER INTERFACES. Be the first to play! The event is also a fundraiser to support Dames traveling to the Game Developers Conference! Plus lasers!

Grab craft beer by the can, chow on local grub and shoot for some raffle prizes, while playing some cool games and checking out our ominous laser trap!

 Vector attendees! Are you coming straight from the Interface Experiments in New Media and Game Art panel at Interaccess? Make sure to grab a (free) DMG chip on your way out. Trade your chip in at the party to receive $1 off any food or drink item.

Entry is FREE but you can add a donation to your advance tickets (or at the door). All-ages (must be 19 to drink).

 —————————————-

ALSO!

On February 21st, we will be giving the public a behind-the-scenes tour of The LEAP Engine, as guest speakers at Dames Making Games’ February Speakers Social.

GUEST SPEAKERS @ DAMES MAKING GAMES SPEAKERS SOCIAL

Saturday, February 21, 2015, 5:00pm – 7:00pm
@ Bento Miso Collaborative Workspace, 862 Richmond Street West

Facebook Event Page

Text Tone at Hamilton Winterfest 2015

We are pleased to announce our new interactive project, Text Tone, as part of the Hamilton Winterfest 2015 Kick-off Event, On The Waterfront, curated by Tara Bursey! Text Tone is on display on February 7th, 2015, at Pier 8 in Hamilton! Come out and enjoy projects and events by a bunch of talented artists and coordinators!

Facebook Event Page

Text Tone

Text Tone is an interactive mobile phone based sound installation that decodes text messages sent by the audience into touch-tone keypad audio compositions. In 1878 in Hamilton, Ontario, Hugh Cossart Baker, Jr. established the first commercial telephone exchange, making Hamilton the first location in the British Empire to have a publicly accessible telephone network. Prior to this, telephone networks commonly consisted of direct lines from one specific unit to another, so this development allowed the populace to use a single device to contact more than one person for the first time. Despite this increase in function and accessibility, the telephone remained a luxury primarily for wealthy households and businesses until the 1920s. Now, these technologies have evolved to become widely accessible, and much of our personal behaviour, lifestyle, business practice, and overall relationship with information is mediated through instant electronic communication.

From rotaries to push-button touch-tone keypads, to touch-screen smartphones with keyboards, these interfaces have undergone many changes over the years. Because the phone originated as a speaker-and-microphone device, the touch-tone keypad’s primary function was almost exclusively to input phone numbers for voice calling. As the phone has evolved into a form of mobile computing, the incorporation of a keyboard has become essential, and components of older forms of this technology have lost their necessity, such as the sounds of number keys being pressed to spell out letters while texting. In commemoration of the evolution of the function and accessibility of these technologies throughout the years, Text Tone invites its audience to participate in examining how once-familiar communication behaviours are becoming lost languages to be unlearned through obsolescence. Text Tone explores how obsolescence is created by the needs of our evolving communication habits, and correspondingly, how advances in these technologies have the power to influence our habits all on their own.
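
We’ll save the details of how the piece is actually built for a future post, but as a rough, hypothetical illustration of the idea (this is not our implementation; the message, mapping table, and output format below are just examples), a text message can be broken down into old-school multi-tap key presses and their standard touch-tone (DTMF) frequency pairs like this:

// Illustrative sketch only -- not the actual Text Tone implementation.
// Maps the letters of a text message onto multi-tap key presses and looks
// up the standard DTMF frequency pair for each keypad digit.
#include <cstdio>
#include <cstring>
#include <cctype>

// Letters grouped by keypad digit (2 = ABC, 3 = DEF, ... 9 = WXYZ).
const char* KEY_LETTERS[10] = {"", "", "ABC", "DEF", "GHI", "JKL", "MNO", "PQRS", "TUV", "WXYZ"};

// Standard DTMF row/column frequencies (Hz) for digits 0-9.
const int DTMF_LOW[10]  = {941, 697, 697, 697, 770, 770, 770, 852, 852, 852};
const int DTMF_HIGH[10] = {1336, 1209, 1336, 1477, 1209, 1336, 1477, 1209, 1336, 1477};

int main() {
    const char* message = "HIVE";  // pretend this arrived as a text message
    for (const char* p = message; *p; ++p) {
        char c = toupper(*p);
        for (int digit = 2; digit <= 9; ++digit) {
            const char* hit = strchr(KEY_LETTERS[digit], c);
            if (hit) {
                int presses = (int)(hit - KEY_LETTERS[digit]) + 1;  // multi-tap: press the key this many times
                printf("'%c' -> key %d pressed %d time(s), tones %d Hz + %d Hz\n",
                       c, digit, presses, DTMF_LOW[digit], DTMF_HIGH[digit]);
                break;
            }
        }
    }
    return 0;
}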

About On the WaterFront

To be on the waterfront is to be on the threshold of something. The waterfront is where settlers landed, and early trade took place. In the 19th Century, the area surrounding Pier 8 was home to some of the city’s first industrial sites, including an iron works, boat works, sail loft and glass company.

In On The Waterfront, local industrial sites and history serve as points of departure for contemporary artists from around the region. Evocative outdoor installations will draw on skills, materials and forms associated with early industry as well as the social history of the North End neighborhood. This exhibition will consider Hamilton’s waterfront as a site of historical significance, tension and possibility, as well as a place where past stories and dreams of the future collide.

Featuring Work By:

Lesley Loksi Chan
Hopkins Duffield
Carey Jernigan and Julia Campbell-Such
Fwee Twade (Becky Katz and Matt McInnes)
C. Wells

Callout: We Want Your Old Phones!

We’re looking for old or obsolete telecommunication tools for our upcoming project Text Tone at Hamilton Winterfest 2015 (more details about this real soon!). As part of the piece, we’re working to create a scrappy reactive techno-tombstone. If you’re in the Toronto or Hamilton area and have phone technology that you’re willing to donate, contact us! This means old phones, new phones, broken phones, working phones, landline phones, cell phones, phone cords, answering machines, pagers, beepers, rotary phones, anything you’re willing to donate to our monolith of obsolescence!

Taking apart a rotary phone!


Text Tone is an interactive mobile phone based sound installation that decodes text messages sent by the audience into touch-tone keypad audio compositions. In 1878 in Hamilton, Ontario, Hugh Cossart Baker, Jr. established the first commercial telephone exchange, making Hamilton the first location in the British Empire to have a publicly accessible telephone network. In commemoration of the evolution of these technologies, Text Tone invites its audience to participate in examining how once-familiar communication behaviours are becoming lost languages, unlearned through obsolescence. Text Tone explores how obsolescence is created by the needs of evolving communication habits, and correspondingly, how advances in these technologies have the power to influence our habits all on their own.

More info on the project soon!

Hive 2.0 Process – Ultrasonic Sensors and Design Challenges!

Our goal with Hive 2.0, the second iteration of the piece, which was exhibited at New Adventures In Sound Art recently, was to make the work more dynamic and interesting by having it respond to users in the room. With our projects, we try to get the best result that we can given our time and budget constraints; as a result, we often designate a substantial portion of our budget to material research, and do rigorous testing to ensure that our work is relatively robust and responsive. As with any sensing approach, each method has its pros and cons, and compromises often have to be made. It becomes a matter of determining which compromises are best suited to the specific situation or setup. We thought we’d share our process and research with others in hopes that it helps anyone who may want to use similar tools!

The Kinect Approach – We did not use this

Originally our plan was to mount an Xbox Kinect onto the ceiling in order to track the audience within the space. For those of you who do not know, the Kinect essentially contains a depth camera that uses an infrared laser projector to throw IR light out into the room and then measures the distance to each point where that IR light contacts an object or person. This gives it the capability to sense and capture an environment as a three-dimensional (x, y, z coordinates) representation. This differs from a traditional video camera, which depends on sensing environmental lighting conditions in the visible light spectrum and produces a two-dimensional (x, y coordinates) image. The concept of mapping a scene as a matrix of three-dimensional coordinates is also known as a depth map. Compared to a traditional video camera, the addition of a third dimension makes it much easier to separate the spatial and material boundaries between objects and architecture and, most importantly, to distinguish users from their surrounding environment.

In combination with Max 6’s cv.jit library, we would be able to detect users in the space by essentially setting the camera’s threshold to ignore the Hive sculpture and to detect a distance range at about the height of the average person’s waist when standing in the space. Kyle had prior experience working with the Kinect and Max 6 in his project Trace, so this seemed like an affordable ($100 for the sensor back in the day when it was newer) and familiar option. Also, by being ceiling mounted and viewing the space from above, the Kinect would help us avoid dead-zone issues like those we found with the ultrasonic sensors (which we’ll elaborate on further below). Additional benefits were that we could uniquely identify and track users in the space, opening up some interesting sound design and interaction options. However, we ran into multiple design issues in the space with this approach.
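
(Side note: our actual prototyping happened inside a Max/cv.jit patch, but the depth-thresholding idea is simple enough to sketch in a few lines of plain code. The frame size matches the Kinect’s depth stream; the depth band values below are placeholders rather than our calibration, and depths are assumed to be in millimetres.)

// Rough sketch of depth thresholding, not our actual Max/cv.jit patch.
// depth[] holds one Kinect frame in millimetres; anything outside the
// "waist height" band gets masked out, leaving only candidate user pixels.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int W = 640, H = 480;
    std::vector<uint16_t> depth(W * H, 0);   // would come from the Kinect driver
    const uint16_t MIN_MM = 1500;            // placeholder: nearer than this = ignore
    const uint16_t MAX_MM = 2200;            // placeholder: farther than this = floor/sculpture

    int userPixels = 0;
    for (int i = 0; i < W * H; ++i) {
        bool inBand = depth[i] >= MIN_MM && depth[i] <= MAX_MM;
        if (!inBand) depth[i] = 0;           // mask out everything except the waist-height band
        else ++userPixels;
    }
    printf("candidate user pixels: %d\n", userPixels);
    return 0;
}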

Issue 1 – Sculpture Movement for Events:

One issue is that within NAISA’s space, we needed to move the sculpture to accommodate musical performances on weekends. As a result, we had to periodically hoist the sculpture up to the ceiling so that it was out of the way, and this caused problems with our intention to use the Kinect for tracking. With the ceiling-mounted Kinect watching for new things entering our designated tracking region (i.e., a user), we could ignore the three-dimensional area of the received image where the sculpture and any other stationary objects in the room would be present. We can think of this as virtually masking out regions of the image. However, on top of this masking, the camera would not know where the speaker channels are located on the sculpture, and this information is necessary for programming the speakers to react in specific regions according to a person’s proximity. So we would have to manually set these regions in relation to the spatial information (with masks) received from the Kinect. This method would have been doable if the sculpture remained static; however, because the sculpture was being moved weekly, it would have been very difficult to reliably calibrate the user’s proximity data to each speaker channel, since the sculpture might have been in an entirely different place each time it was temporarily moved.

Hive in 'Stealth Mode' for performances at NAISA

Hive 2.0 in ‘Stealth Mode’!


Issue 2 – cv.jit Reliability

In general, cv.jit’s blob detection algorithms were VERY CPU intensive and not particularly reliable for tracking. Some tweaking could have been done to smooth things out, but there were still more issues with this approach!

Issue 3 – Height of Ceiling:

The Kinect would have been ceiling mounted, with the center of the sculpture corresponding to the center of the camera frame. After taking into account the circumference of the sculpture, and given the field of view of the camera (i.e., how wide an area the Kinect camera can capture), we could only capture an area of about two to three feet on each side of the sculpture. Not very exciting. We entertained using two Kinects and stitching the images together to expand the field of view. We also considered using three Kinects mounted on the wall to create an accurate capture of the space, but between price ($400+), CPU performance, task complexity, and the other aforementioned problems, it was seeming like the Kinect wasn’t the tool for this project, and especially not in this space.
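
For a rough sense of the numbers (the figures here are illustrative placeholders rather than measurements of NAISA’s space): the Kinect’s horizontal field of view is roughly 57°, so with about a 2 m drop from the ceiling-mounted sensor down to waist height, the widest slice of floor it can see is approximately:

coverage ≈ 2 × d × tan(FOV / 2) = 2 × 2 m × tan(28.5°) ≈ 2.2 m (a little over 7 ft)

Subtract the sculpture’s footprint from the middle of that, and only a couple of feet of trackable floor remain on each side.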


The Proximity Sensor Approach

Our solution was to find proximity sensors and mount one on each channel. This way, each speaker channel would be able to read how close ‘something’ was to it. As with many of our projects, we needed quite a few of these sensors to get the desired result, and of course costs add up quickly with the number of units needed.

There are two types of proximity sensors we tested. These were chosen because they fit into our constraints of accessibility, time, and budget.

Hive 2.0 Sensor Diagram

Infrared (IR) Proximity Sensors – We did not use these

IR proximity sensors would be like a simple version of the Kinect, in that the sensor shoots out an infrared beam and detects how far away something in its path is. The Kinect is essentially a 640 X 480 array of these units with longer ranges (as they are IR lasers). These simpler sensors cost about $10, so they could have been an affordable (six channels = $60) solution. The issue with the IR approach is that the accuracy of the readings varies depending on the material they interact with (i.e., cloth absorbs more light than a hard surface, reflecting it differently and influencing the consistency of the data coming in). Also, the data varies depending on ambient infrared light in the room, which could lead to inconsistent readings as more people enter or as lighting conditions change. The sensors in the lower price range also did not have a long or a wide capture range. Just like the Kinect, this wasn’t our solution.


Ultrasonic Proximity Sensors

Ultrasonic sensors work just like SONAR. The sensor emits a high frequency pulse and calculates the distance to an object or user by measuring how long the pulse takes to reflect back to the sensor.
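
In equation form, keeping in mind that the pulse covers the distance twice (out and back):

distance ≈ (speed of sound × round-trip time) / 2

Sound travels at roughly 343 m/s at room temperature, so a pulse that takes about 5.8 ms to return corresponds to an object roughly (343 × 0.0058) / 2 ≈ 1 m away.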


Issues With Ultrasonic Sensors

There were still some issues with this approach in the case of Hive 2.0. One issue is that multiple ultrasonic sensors can interfere with each other, given that you have sound waves bouncing around in a space, and this can result in unstable data. Our solution was essentially to ‘strobe’ between the sensors so that only one was on at a time. In other words, if you have six sensors, start with all of the sensors inactive. Sensor 1 sends out and receives its pulse. Sensor 1 rests. Sensor 2 sends out and receives its pulse. Sensor 2 rests. Etc. Etc.

The issue with this approach is that you lower your time resolution. For example, if you have a sensor that reads data at 60 Hz (or think of it as 60 times per second), dividing this by the number of sensors (i.e., 60 Hz / 6 sensors) means each sensor only gets 10 readings per second, or a ‘frame’ of data every 100 milliseconds (ms). If you’re not used to working with computers that may not sound like much, but it’s actually a lot of time lost, and it results in the user perceiving latency! Using video as an analogy, 30p video is 30 frames per second, so you see an image every 33.33 milliseconds to create continuous motion. A ‘frame’ every 100 ms results in a frame rate of 10 frames per second! Not the best, and kind of choppy!
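
To make the strobing idea concrete, here’s a bare-bones Arduino-style sketch of the round-robin approach (pin numbers and the analog-style read are placeholders, not our exact wiring or firmware):

// Simplified round-robin "strobing" sketch (illustrative pin numbers).
// Only one sensor ranges at a time so the sensors can't hear each other's
// echoes -- the trade-off is that each sensor updates NUM_SENSORS times slower.
const int NUM_SENSORS = 6;
const int TRIGGER_PINS[NUM_SENSORS] = {2, 3, 4, 5, 6, 7};  // placeholder wiring

int readings[NUM_SENSORS];
int current = 0;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_SENSORS; i++) pinMode(TRIGGER_PINS[i], OUTPUT);
}

void loop() {
  // Ask only the current sensor to take a reading.
  digitalWrite(TRIGGER_PINS[current], HIGH);
  delayMicroseconds(25);                        // brief "go range" pulse
  digitalWrite(TRIGGER_PINS[current], LOW);
  delay(50);                                    // let it listen for its echo

  readings[current] = analogRead(A0 + current); // placeholder: analog-output style sensor
  Serial.print("sensor "); Serial.print(current);
  Serial.print(" raw: "); Serial.println(readings[current]);

  current = (current + 1) % NUM_SENSORS;        // 6 sensors x ~50 ms = ~300 ms per sensor update
}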

Another issue is that when getting close to the sculpture, there can be dead zones between the sensors, as they are mounted on the exterior of the sculpture. If you compare the Kinect floor tracking diagram to the Ultrasonic floor diagram, you’ll notice that the Kinect had no dead zones. However, despite these issues, ultrasonic sensors ended up being our most appropriate solution. Now let’s get into which ones we chose to use, and why.


What We Used

We tried two main brands of Ultrasonic sensors, both accessible at our local sensor go-to shop, Creatron. One was the Elec Freaks HC-SR04, which was a $9 option and would have cost about $60 for the project. The second was the MaxSonar LV EZ-0 which cost about $36 per unit ($216 for the project). After our tests, we concluded that the EZ-0 was worth the extra cash to get the best experience that we could achieve given our time/money limitations. But let’s go through a breakdown of why we made this choice:

MaxSonar EZ-0 and HC-SR04

EZ-0 VS HC-SR04!


HC-SR04

HC-SR04 Datasheet

Summary:

Cost: $9

Working Voltage: DC 5V

Working Current: 15 mA

Reading Rate (approx): 16 – 17 Hz

Pins: 2 digital (one trigger output, one echo input read as a pulse width)

Physical Dimensions (approx): 20 mm X 45 mm X 14 mm (13/16″ X 1 3/4″ X 9/16″)

Range:

Officially: 2 cm –  4 m (3/4″ – 156″)

Our Test Results (approx): 2.5 cm – 1.21 m (1″ – 48″)

Beam Width (approx): 15°

We tested this HC-SR04 sensor in conjunction with the Arduino ping library. The sensor wasn’t bad, but it had two major drawbacks. One was that the sensor had a 15-degree beam, which was not as wide as the EZ-0’s, resulting in less of an area being tracked. Another issue was that the sensor’s reading rate (roughly 16 – 17 Hz) was slower than the EZ-0’s, which would result in some nasty latency when six of these were racked up in our setup. In addition, the wiring also would have been slightly more complicated, as each sensor required the use of two digital pins – one to send the pulse, and one to receive it. 2 X 6 = 12 digital pins being used on our Arduino Uno! Yes, we could have used a shift register or multiplexer, but we haven’t used those yet, and being very limited on time, we didn’t want any additional headaches with unknown variables. Additionally, this sensor was larger than the EZ-0, which made our mounting options for it potentially a bit more tricky or labour intensive. The final blow was that the sensor’s reading distance was less than advertised, and it only seemed to reliably read 3 – 4 feet, giving it little advantage over the Kinect approach. Now for the EZ-0.
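
For anyone curious, here’s roughly what a bare-bones HC-SR04 reading looks like on an Arduino without the ping library (pin numbers are placeholders); you can see how each sensor eats two digital pins:

// Bare-bones HC-SR04 reading -- note that each sensor uses two digital pins.
// Pin numbers are placeholders, not our actual wiring.
const int TRIG_PIN = 2;   // we pulse this to tell the sensor to range
const int ECHO_PIN = 3;   // the sensor answers with a pulse whose width = round-trip time

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // A 10 microsecond trigger pulse starts a measurement.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Echo pulse width in microseconds; sound travels ~0.0343 cm per microsecond,
  // and the pulse covers the distance twice (out and back).
  unsigned long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout
  float distanceCm = duration * 0.0343 / 2.0;

  Serial.println(distanceCm);
  delay(60);  // the datasheet suggests ~60 ms between measurements
}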

MaxSonar LV – EZ-0

LV EZ-0 Product Page

LV EZ-0 Datasheet

Summary:

Cost: $36

Working Voltage: 2.5-5.5V

Working Current: 2 mA

Reading Rate: 20 Hz

Physical Dimensions: 22 mm X 20 mm X 15 mm (7/8″ X 13/16″ X 9/16″)

Pins (for our use): 1 Analog Input

Range:

Official: 15.2 cm – 645 cm (6″ – 254″)

Our Test Results (approx): 76.2 cm – 3.048 m (30″ – 120″) **Note: we did not have the proper space to test the maximum advertised range; however, we found it most responsive within this range

Beam Width (Approx): 70°

Although it was about four times the cost per unit of the HC-SR04, we chose to use this sensor with Max 6 in conjunction with our custom rewrite of Lasse Vestergaard’s and Rasmus Lunding’s ArduinoInOutForDummies library. We had numerous reasons for choosing the LV EZ-0 over the HC-SR04. First off, it was slightly faster, and given that neither sensor’s reading rate was totally ideal to begin with (20 Hz split across six sensors works out to a reading roughly every 300 ms per sensor), we needed the fastest we could afford. Also, the EZ-0 consumed less current (not a major concern in this scenario, but if we ever get the budget to add more sensors, this would really add up, although given the aforementioned reading rate, we would be reluctant to add any more of either of these models!). Using the analog readings from the sensors was both easier to set up and required fewer Arduino pins (6 analog pins vs 12 digital pins). The sensor also had MUCH better documentation, including diagrams on how to use multiple sensors together without any shift registers or multiplexers! As a side note, we ended up using the ‘Sequential’ method versus the ‘Continuous Looping’ method, as the latter didn’t work as expected and appeared to operate no differently than the former. This may have something to do with how Max deals with the Arduino, though.
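
To give a sense of why the analog route was so painless, here’s roughly what reading a single EZ-0 looks like on the Arduino side. This is a simplified standalone illustration with placeholder pins, not our actual Max + ArduinoInOutForDummies setup:

// Reading one MaxSonar LV EZ-0 via its analog (AN) pin -- placeholder pins.
// At 5 V the LV series outputs roughly Vcc/512 per inch, which on the Uno's
// 10-bit ADC works out to about 2 counts per inch, so inches ~= reading / 2.
const int SONAR_AN_PIN = A0;   // analog output from the EZ-0
const int SONAR_RX_PIN = 7;    // RX pin: pulse high (20+ microseconds) to command a range reading

void setup() {
  Serial.begin(9600);
  pinMode(SONAR_RX_PIN, OUTPUT);
}

void loop() {
  // Tell this sensor to take a reading; the 'Sequential' chaining trick is
  // essentially just doing this to one sensor at a time instead of tying RX high.
  digitalWrite(SONAR_RX_PIN, HIGH);
  delayMicroseconds(25);
  digitalWrite(SONAR_RX_PIN, LOW);

  delay(50);  // ~50 ms for the reading to complete (the sensor's ~20 Hz cycle)

  int raw = analogRead(SONAR_AN_PIN);
  float inches = raw / 2.0;   // approximate conversion at 5 V
  Serial.println(inches);
}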

From an aesthetic standpoint, this sensor was physically smaller, and easier to design mounts for. However, the main reason we chose the EZ-0 was that its range was significantly wider ( 70° vs 15° ) and longer (10 ft vs 4 ft) than the HC-SR04. Given that dead zones are an admitted design flaw with the Ultrasonic sensor approach, we needed a beam as wide as possible. Additionally, EZ-0 just seemed a bit more responsive and dynamic in general, so we opted to pay a bit more to give people the best experience we could provide.


Long Shot of Hive 2.0 Sculpture

Hive 2.0 Sculpture

So there you have it! If any of you out there have any questions, we’d be happy to provide more insight into our process! Do you have any other tools to recommend to us, or a better product you’d like to sponsor us with? Contact us!
