I’ve been game jamming for five or so years. It’s a hobby I really enjoy. This was some off-the-cuff advice to folks new to the Global Game Jam, which will be happening again this coming January.

I saw a question on Design News this morning that made me think a bit: “what makes something a product?” I wrote this pedantic little diatribe in response that I thought worth sharing:

A product is an artifact produced with commercial intent by its creator, for an audience whose intent is the consumption of that artifact. The qualities of the artifact are less important than the context in which it was produced. Hence anything on the above list *could* be a product, but could also be produced in such a manner as to no longer be so. An example is useful.


The following post was originally posted on Gamasutra.com as Jumping Head First Into Motion Control Design.

Leap Motion engineer and UX researcher Daniel Plemmons shares common challenges and pitfalls facing creators of motion-controlled games, and offers insights on how to avoid and mitigate them to create magical motion-controlled experiences.

Input is a fundamental part of the gaming experience, and we live in exciting times. Every week, it seems there’s a cool new natural interface startup or Kickstarter, and our choices about interaction aesthetics now begin with the crucial choice of input hardware. We live in the age of the Oculus Rift, the Myo, the Leap Motion Controller, Project Morpheus, the Kinect, PS-Move, the WiiMote, Sifteo Cubes, Omni-Treadmills, webcam computer vision, accelerometer and gyroscope-enabled mobile phones, and the Razer Hydra. Not to mention the scores of custom installations like the ones we saw at this year’s alt.ctrl.gdc show.

The Leap Motion Controller at work. Detailed, real-time hand tracking.

The qualities and limitations of any individual input device shape the mechanics we design and the psychology of our players. The resulting interaction aesthetics have a significant impact on our play experiences. Historically, there has been a relatively limited number of reasonable input device options. For most developers, the toolbox included a keyboard and mouse, a gamepad, and maybe a joystick or racing wheel. There’s been no lack of variety created with these traditional input devices, but the recent explosion of new input and output technologies has opened up a lot of new design opportunities and challenges.

One of the essential features of motion control is its power to change the player’s mindset. When our bodies are involved in the play experience, we become part of the game, along with the space around us. How this manifests depends on the interaction aesthetics you choose to build. When I play Johann Sebastian Joust with a set of Playstation Move controllers, one of the coolest moments is when someone finds a creative way to use the world or the objects around the playspace to their momentary advantage. When I play Dance Central on my Kinect, I feel a sense of presence in my experience and am energized by the full-body interactions of the game. There’s also a performative aspect to the game – in my case, mostly making myself look very silly. When I play Dropchord with my Leap Motion Controller, I’m focused on the minutiae of how I move, aware of each motion and the paths my hands take to reach each location, lending the experience a sense of flow.

Die Gute Fabrik’s JS Joust engages players’ bodies and the playspace

With all these new options, it’s easy to get excited about motion controls. It’s even easier to be blinded by their novelty and miss out on creating a great experience. If you’d like to take the plunge into motion control design, there are some common pitfalls that, if you work to avoid them, will make your life far easier. In return for the new opportunities of motion control, we cast off 40+ years of games and interfaces designed for keyboards, buttons, and mice: discrete, binary input systems. Designers working with NUI input devices must work in a very different headspace.

Let’s Motion Control All the Things

I often meet enthusiastic developers who’d like to add motion control to just about everything. I love and share their passion, but it’s a double-edged sword. Not every concept is a good candidate for motion control. Designers must consider the mechanics and interaction aesthetics they’re looking to create, and weigh them against the strengths and weaknesses of their chosen input device.

Motion Control All the Things

If you’re selecting between in-air motion controls or a handheld device, consider the following. Players have a very high expectation of binary input. If a discrete action only works 90–95% of the time, players are going to be very frustrated. Devices like the Playstation Move and Razer Hydra solve this problem by placing physical buttons on the controllers you hold in your hands. Developers using in-air controllers like the Leap Motion Controller or Kinect don’t have that luxury. Instead, they trade binary interactions for many more dimensions of rich analog data, allowing them to map gameplay and feedback to a wide variety of variables – like the relative angles of joints, the directions of individual fingers, and the rotation and velocity of the user’s palm. This flexibility lends itself to creating rich interactions that would otherwise be impossible. Consider these tradeoffs carefully when selecting your inputs and your mechanics, and consider them a guide when designing the rest of your game experience.
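As a toy illustration of mapping that rich analog data onto gameplay variables, here’s a sketch in Python. The `HandFrame` fields and all the scaling constants are invented for the example, not any real tracking SDK’s API:

```python
from dataclasses import dataclass

# Hypothetical per-frame hand sample; field names are illustrative,
# not any real tracking SDK's API.
@dataclass
class HandFrame:
    palm_velocity: tuple   # (x, y, z) in mm/s
    palm_roll: float       # radians
    pinch_strength: float  # 0.0 (open) .. 1.0 (pinched)

def map_to_gameplay(frame: HandFrame) -> dict:
    """Map analog hand data onto game variables instead of forcing it
    into fragile binary 'gestures'."""
    vx, vy, vz = frame.palm_velocity
    return {
        # Steer from palm roll, clamped to [-1, 1] for the steering axis.
        "steer": max(-1.0, min(1.0, frame.palm_roll / 1.5)),
        # Thrust scales with forward palm speed, not an on/off trigger.
        "thrust": max(0.0, min(1.0, -vz / 500.0)),
        # Analog grab amount can drive animation blending, not just a bool.
        "grab": frame.pinch_strength,
    }
```

Because every output stays continuous, feedback (engine pitch, grip animation, lean) can track the hand smoothly instead of snapping between two states.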

I don't always build with motion controls

Ignoring Menus and UI

In a similar vein, bad UI can quickly hamstring a title. At Leap Motion, many of our own early internal prototypes and applications struggled with this.

Touchless enables some very cool interactions, but suffers from an underwhelming UI.

Mechanics we’re used to supporting with the myriad of buttons, analog sticks, and knobs at our disposal are suddenly difficult to map onto the organic analog data that motion control presents. When considering motion controls, make sure to understand how players will interact with game options and menus, how they’ll pause your game, and how they’ll switch weapons or tools. It’s important not to overly pack the input space, especially with in-air motion controllers where gesture detection can be varied and fuzzy. As gestures and inputs become increasingly similar, the odds increase that a player will be confused or unable to remember an input, or that the game will register false positives and false negatives. Good design will stem from deliberate planning and mapping of controls. A game’s menus don’t always have to be mapped to the primary motion controls either.
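One way to audit how densely you’ve packed the input space is to compare your gesture definitions directly. This sketch represents each gesture as a made-up feature vector and flags pairs that sit suspiciously close together; the features, numbers, and threshold are purely illustrative:

```python
import math

# Toy gesture "templates" as feature vectors (e.g., dominant motion axis
# x/y, speed, hand openness). Purely illustrative numbers.
GESTURES = {
    "swipe_left":  (-1.0, 0.0, 0.9, 1.0),
    "swipe_right": ( 1.0, 0.0, 0.9, 1.0),
    "wave":        (-0.8, 0.1, 0.8, 1.0),  # suspiciously close to swipe_left
    "grab":        ( 0.0, 0.0, 0.1, 0.0),
}

def confusable_pairs(gestures, min_distance=0.5):
    """Flag gesture pairs whose templates sit too close together in
    feature space; likely sources of false positives and negatives."""
    names = sorted(gestures)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if math.dist(gestures[a], gestures[b]) < min_distance:
                pairs.append((a, b))
    return pairs
```

Running this over the toy set above flags only `swipe_left` and `wave`, exactly the pair a fuzzy detector would most likely mix up.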

Multi-Modal Input

Many of the UI and mechanic challenges described above can be alleviated by smart use of multi-modal input – multiple input devices creating a single experience. By mixing and matching input devices, designers can negate many of the limitations of each individual device. For instance, Double Fine Productions’ Leap Motion title Autonomous uses traditional WASD controls for moving the player, while using gestural input for look, weapons, and other ‘analog’ actions.

The challenge with multimodal input is in the transition between modalities. If a player’s hands are busy interacting in the air, it may be frustrating to lower them to a keyboard. By the same token, the game must not register this lowering of the hand as its own input. Autonomous solves this by dedicating each hand to a single input device, but this is certainly not the only design pattern available.
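One possible way to avoid registering the hand-lowering as input is to filter the tracked hands before the gesture system ever sees them. This sketch assumes a simple dictionary per hand; the field names and the 150 mm floor are invented for the example:

```python
def gesture_input(hands, floor_y=150.0):
    """Filter tracked hands for the gestural modality.

    Hands dropping below `floor_y` (mm above the sensor) are assumed to
    be reaching for the keyboard and are ignored rather than interpreted
    as input. Only the right hand drives gestures, mirroring the
    one-device-per-hand pattern described above.
    """
    return [
        h for h in hands
        if h["side"] == "right" and h["palm_y"] >= floor_y
    ]

hands = [
    {"side": "left",  "palm_y": 200.0},  # on WASD duty; never gestural
    {"side": "right", "palm_y": 80.0},   # lowering toward the keyboard
    {"side": "right", "palm_y": 220.0},  # active gesture hand
]
```

The filter is deliberately dumb; the point is that modality transitions are handled once, in one place, instead of leaking into every mechanic.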

Mechanics Creep

Once you’ve settled on a concept and input device, the core input needs to play well before additional art and mechanics are layered on. Too many motion control games get mired in production before finding the fun in their input. A game’s design and assets often become solidified too early and the changes needed to make their motion controls work aren’t realistic later in production. Plan for extensive prototyping time early in development. As a game’s input changes, the mechanics, visual feedback, or even a game’s entire visual style will need to adapt.

The core flight mechanics of Verticus lent themselves well to motion control.

This also highlights the dangers of porting a game to a motion-controlled platform. Unless a game strongly lends itself mechanically to motion controls, porting it will be incredibly difficult. Even games that are good fits for motion control often have to significantly redesign their UIs and menus to support a good motion control experience. When embarking on a port, be prepared to rebuild a lot of UI to support your chosen input device.

User Training

Motion control games will be played by many players who have never used the particular input device before and may not have a clear idea what it does or how it works. In addition to training users on how to use the game, a small part of the experience should be dedicated to familiarizing players with the new input device.

Patrick Hackett (@playmorevgames) and Drew Skillman (@dskillsaw) from Double Fine describe this in their Gamasutra article all about developing their first Leap Motion Controller title, Dropchord:

“…as new technology emerges and we create the initial wave of applications, it’s important to clear the cache and re-think how the first consumers are going to approach the product.”

“To familiarize players with the ideal locations of their hands, the initial screen requires the player to line up and hold their fingers over spinning circles. When done correctly, there is audible and visual feedback and the game beam slowly forms.”

Dropchord’s start screen focuses on teaching how to use the input device

Dropchord uses its start screen, complete with dynamic audio and visual feedback, to teach players how to use the device. The message is clear and simple for new players, while expert players can move past the screen quickly and are treated to beautiful audio and visual effects along the way.

Digital Feedback in an Analog World

Designing for traditional input devices, we’re used to binary states: hovering or not; touching or not; mouseDown, or not. With motion control platforms, the experience is defined less by individual states, and more by transitions between those states. To account for this, designers must reconsider the structure of their visual and auditory feedback. Just as our controls use motion, so must our feedback. I’ve found myself referring to this as “dynamic feedback,” but I’ve also heard “motive feedback” or “analog feedback.”

As the player moves their body in the scene, the application should constantly respond to their motions; communicating what the interface cares about at any one time. This is in contrast to most traditional desktop and mobile design, where the interface only changes when the user directly interacts with the game. The nearest design analog on desktop is hover effects on buttons. It may help to think of dynamic feedback as “super hover.”
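A minimal sketch of that “super hover” idea: highlight strength rises continuously as the cursor approaches a target, rather than flipping on at a boundary. The linear falloff and the 200-unit distance are arbitrary choices for the example:

```python
def hover_intensity(cursor, target_center, falloff=200.0):
    """'Super hover': glow strength rises continuously as the cursor
    approaches the target, so the interface is always communicating
    what it is paying attention to, not just reacting at a boundary."""
    dx = cursor[0] - target_center[0]
    dy = cursor[1] - target_center[1]
    distance = (dx * dx + dy * dy) ** 0.5
    # 1.0 directly over the target, fading linearly to 0.0 at `falloff`.
    return max(0.0, 1.0 - distance / falloff)
```

Run every frame against every interactive element, this gives the player a constant, analog answer to “does the interface see me?”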

The menus in Leap Motion’s application Freeform use constant dynamic feedback to aid usability.

In our prototyping and user research at Leap Motion, we’ve found the addition of bold, clear dynamic feedback drastically improves users’ experiences. While developing one of the early Leap Motion applications, Freeform, our design team ran through a rigorous process of prototyping and iteration to develop the UI interactions for the app. In doing so, we developed a set of very successful design patterns and resources for the wider development community. You can find a more detailed discussion of our process and dynamic feedback in Freeform’s UI design in our post on the Leap Motion blog.

Foundational Feedback and Your Senses

While motion controls allow for a high degree of freedom and nuance, they lack many of the traditional signals and feedback we’re used to from our hardware input devices. For example, let’s compare and contrast the physical and mental processes that take place selecting a button on a web page, with a mouse and with the Leap Motion Controller.

The mouse version

(1) First, you put your hand on the mouse. You can feel it and you know the mouse can “detect” your input (tactile feedback). You’ve declared your intent to interact with the computer. You move the mouse along the table. It takes a moment for you to find your cursor on the screen, but as long as that cursor moves when you move the mouse (visual feedback), you know the mouse is working.

 Touch, sight, and proprioception all combine to let you move your mouse pointer quickly and easily.

(2) You move your cursor towards the button. The feeling of resistance from the table and your sense of proprioception (where the parts of your body lie in relation to each other) tell you how far you’ve moved your arm. The cursor on the screen simply confirms your expectations. As the cursor nears your target, your eyes focus on it, letting you correct your exact position. You’re not thinking about it, but you’re constantly making tiny corrections as you move.

The various feedback vectors available per platform

(3) Your cursor crosses the boundary of the button, and it highlights (visual feedback).

(4) Your index finger presses down on the left mouse button (or left side of the mouse if you’re on a Mac). You feel the resistance of the button and then the reassuring pop as you exert enough force to depress it (tactile feedback). You also hear the ubiquitous “click” sound we’re all used to (hardware auditory feedback). You’ve used this so much you know this means the computer has registered your input. On the screen, the button confirms your input by changing color and/or shading.

Additional visual feedback communicates the system state.

(5) Within milliseconds, your finger releases its pressure on the button, you feel another “pop,” and you hear the second half of the anticipated “click.” The main content area of the webpage flashes white, the button you just pressed transitions from a light background to a dark one, and a small spinner appears next to the name of the browser tab. All this confirms that your input was registered by the website, and it is in fact navigating.

We experience this loop thousands of times per day as we “pick and click” our way through modern desktop interfaces. It takes a tenth of a second, but each piece of feedback is key to the efficient use of the mouse. When a piece of feedback returns an unexpected result, it tells us immediately what’s wrong. Is your cursor not moving? Your mouse must be disconnected, or the computer is locked up. Didn’t feel the button press? You’ve got a broken mouse. Did the button not highlight? It’s probably disabled.

Notice how much of this loop is tactile and auditory. When you’re designing for motion control, your interface must make up for these missing links in the feedback chain. We’re subconsciously aware of a lot of information about the state of our hardware, and the application we control with it. If we’re denied this by a lack of foundational feedback, we conclude an interface is unresponsive, dodgy, confusing, or broken.

The motion control version

Now let’s take the common motion control version of these events – moving to an item and selecting it. Many applications today, like Photo Explore, Touchless, and Verticus, use an in-air “screen tap” gesture for selection. They use a cursor with dynamic feedback to show the user when they’ve made a “click”. As you read this, it’s worth noting that of touch, sight, and hearing, sight is the slowest responding of our senses.

(1) You start by placing your hand in the area you expect the sensor to detect you and point with your index finger. Assuming you’re in the right area, a cursor appears on the screen. Just like the mouse, you may take a moment to find it (visual feedback).

(2) You move the cursor towards the button. You’re relying on your sense of proprioception and watching the cursor to see when it’s in the right place. Each motion control application you’re using has slightly different calibration, so it’s difficult to get a reliable sense of motion.

With in-air gestures you rely on sight and proprioception to help guide and steady your hand.

(3) As your cursor crosses the boundary of the button, it highlights – telling you it’s an active interface element. You hold your hand steady in the air over the button. It’s relatively large, so it’s not too hard.

(4) You push your finger forward, watching the cursor to make sure you keep your finger steady pointing at the right item, making small adjustments as you push forward. As you move forward, an inner circle on the cursor grows to meet the outer circle, signalling a “click” (visual feedback).

Dynamic on-screen feedback is critical to communicating system state.

(5) When the two circles meet, the main content area of the webpage flashes white, the button you just pressed transitions from a light background to a dark one, and a small spinner appears next to the name of the browser tab (visual feedback). Again, this confirms that your input was properly registered. You drop your hand, relaxing the joints.
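The push-to-click loop in steps (4) and (5) could be sketched as a small state holder. The click depth, the coordinate convention (pushing toward the screen decreases Z), and the latch behavior are all made up for illustration:

```python
class PushCursor:
    """Sketch of the two-circle 'push to click' cursor: the inner circle
    grows as the finger advances in Z, and a click fires once it meets
    the outer circle. Thresholds are invented values."""

    def __init__(self, start_z=0.0, click_depth=80.0):
        self.start_z = start_z          # Z where the push began
        self.click_depth = click_depth  # mm of travel for a full click
        self.clicked = False

    def update(self, finger_z):
        """Return click progress in [0, 1]; 1.0 means the circles met."""
        travel = self.start_z - finger_z  # pushing forward decreases Z
        progress = min(1.0, max(0.0, travel / self.click_depth))
        if progress >= 1.0 and not self.clicked:
            self.clicked = True  # latch: one click per push
        return progress
```

The key point is that `progress` is drawn every frame as the growing inner circle, so the player always knows how close they are to committing, and why a half-push didn’t register.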

This flow seems quite usable, but challenges crop up when something along the line doesn’t work properly. What if you don’t see your cursor? Is your hand simply too low, or too far to the right, or is the device not working? What if you push your finger in to “click” and the click doesn’t happen? Are you performing the gesture wrong? If so, how? What if the website doesn’t take the click? Are you gesturing wrong or is the site at fault? Does this website even support this motion tracker?

It’s up to the developers of motion control software to provide users with the answers to these questions. This is where constant dynamic feedback can be a very useful tool. Don’t underestimate the value of good audible feedback either. “Pops” and “clicks” can lend a sense of physicality and don’t require your player to be focused on any individual on-screen element to be useful.

Uncharted Territory
Working with emerging technologies can be risky, difficult, and rewarding. A lot of the time it feels like stumbling in the dark, but each step forward defines the fundamental methods and best practices for interacting with games and media in the modern age. There’s an opportunity to build experiences no one has built before. It takes a lot of prototyping, experimentation, and iteration, which as game developers we’re intimately familiar with. With games as a sandbox for experimentation, we’re well-positioned to push this medium forward in new and innovative ways.

I’m very excited about this modern explosion of motion controllers, and I hope you’ll be a part of the journey to explore this growing design space.

Much thanks to @pohungchen, @alexcolgan, @binxed, @katescarmitch, @plainspace, @cabbibo, et al. for their input, editing, and design work that led to many of the learnings in this article.

The following was originally posted December 16, 2013, on the Leap Motion Blog as Rethinking Menu Design in the Natural Interface Wild West.

Over the years, traditional menu design practices have developed along the lines made possible by hardware. Unfortunately, as a result, many of these practices don’t apply to apps built for natural user interfaces (NUIs) like the Leap Motion Controller, so creating great menus is an ongoing challenge for Leap Motion developers. Recently, by experimenting with some alternative approaches, we’ve managed to push past these growing pains and overcome some of these hurdles in menu design.

Why menu design in particular? It’s rarely the most interesting thing about an application, but accessible and usable interface design is absolutely critical to creating an app that people will love. Developing for an entirely new type of interface means that we’re faced with a veritable wild west – as full of possibilities for growth as it is with enticing pitfalls and hidden vipers.

We recently addressed a number of best practices for menu design in our documentation. These guidelines will continue to evolve in the hope that they’ll help jumpstart your own menu designs, help you avoid some of the common pitfalls we’ve seen (and occasionally fallen into), and suggest some ways you can help navigate the natural user interface wilderness – which is pretty exciting.

While designing a new application, we burned through a bunch of different designs for some very simple 2D menus and structures. We only had a short time for this particular round of prototyping, so we focused on single-finger interactions. Ultimately, we ended up focusing on one menu, which we call a marching menu. Afterwards, we wanted to share some of what we learned during the process. Keep in mind that these are super-rapid prototypes – there’s no aesthetic treatment here, just enough to test the usability of the concepts.

Want to try it out yourself? You can download our Unity3D project (much cleaned up from our prototyping) with the marching menu prototype on BitBucket.


The final marching menu wireframe prototype.

When testing our menu prototypes, we’ve found it useful to couch our analysis in terms of three stages of menu interaction:

  • Activation: how do I bring up the menu?
  • Navigation: how do I find the choice I want?
  • Selection: how do I confirm the choice I want?

To throw a wrench into things right off, we’re going to look at these in reverse order.

Selection: how to confirm the choice I want

To tap or not to tap? That is the question. Now completely intuitive thanks to touchscreens, the screen tap gesture is an obvious first choice for Leap Motion apps. The user is using their fingers just like on a touchscreen!

In reality, screen tap actions for Leap Motion are often passable, but not great. Users don’t get any tactile feedback. It’s hard to provide a sense of visual depth, so screen tapping tends to require a lot of uncomfortable extended pointing. For these reasons, users are often not acutely aware of where their hand lies in Z-space.


Users aren’t generally accustomed to paying attention to the Z-axis.

Plus, since human joints move in arcs rather than straight lines, it’s often hard for someone to perform a Z-space translation while maintaining their X,Y location. Needless to say, this is problematic when they need to move forward and stay centered on a button. You can do a good job following our menu design guidelines (which include some great live JavaScript code examples with LeapJS), but the screen tap gesture isn’t always the best way for users to make a selection.

So how can we do better?

X,Y translation with boundary crossing can be better


An X,Y boundary crossing.

While developing these prototypes, we worked on the hypothesis that crossing a boundary in the X,Y plane would be more comfortable and usable than tapping in the air. Users are much more attuned to where their hands are relative to the plane of the screen, and it’s quite easy to give clear X,Y feedback on a 2D display. This approach also allows for menu designs that reflect what users are used to in traditional applications.
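A sketch of what that boundary-crossing check might look like per frame. The rectangle representation and the choice of the right edge as the commit boundary are assumptions for the example:

```python
def boundary_cross_select(prev, curr, item):
    """Fire a selection when the cursor crosses the item's right edge in
    the X,Y plane between two frames, instead of asking the user to tap
    in Z-space. `item` is (left, top, right, bottom); `prev` and `curr`
    are (x, y) cursor positions on consecutive frames."""
    left, top, right, bottom = item
    # The cursor must have been over the item, not merely passing by it.
    was_inside = left <= prev[0] <= right and top <= prev[1] <= bottom
    # ...and must have moved from inside to past the right edge.
    crossed = prev[0] <= right < curr[0]
    return was_inside and crossed
```

Because the check compares two consecutive frames, fast sweeps still register, and no dwell time or Z precision is demanded of the user.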

Set selection actions apart

We discovered that users are easily confused about whether a selection leads to a submenu or an actual decision. To resolve this, we created very different actions to distinguish navigational selection from item selection. In our marching menu prototype, we used hover (with a tiny delay) to bring up submenus, and an X,Y boundary crossing for item selection.
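The hover-with-delay half of that scheme can be sketched as a tiny dwell timer per menu item; the 0.25-second delay is an illustrative value, not the one from our prototype:

```python
class MarchingMenuItem:
    """Distinct actions for navigation vs. selection: hovering for a
    short, continuous dwell opens a submenu, while committing a final
    choice uses a different action (an X boundary crossing)."""

    HOVER_DELAY = 0.25  # seconds of dwell before a submenu opens

    def __init__(self, has_submenu):
        self.has_submenu = has_submenu
        self.hover_time = 0.0
        self.opened = False

    def update(self, hovering, dt):
        """Advance the dwell timer by `dt` seconds; return whether the
        submenu is open."""
        if hovering and self.has_submenu:
            self.hover_time += dt
            if self.hover_time >= self.HOVER_DELAY:
                self.opened = True
        elif not hovering:
            self.hover_time = 0.0  # dwell must be continuous
        return self.opened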


Note the different feedback mechanisms in play between selecting a sub-menu (hover, left side) and selecting an option (translation, right side).

Navigation: how to find the choice I want

Navigation design depends a lot on your content and its structure. Naturally, many traditional usability best practices are important, including sorting your content, laying it out well, and making it easy to access commonly used items.

Good menu navigation principles

  • let users assess their options quickly
  • provide good breadcrumbs so users know where they are, and how they got there
  • make navigation quick
  • make it easy to back out of or undo a mistaken action (while crucial, this principle is often overlooked)


This menu layout makes it clear where you are in the structure, and how to reach other options.

Choose the right format for your content

Basic radial menus are great for small data sets and icons, but no good for quickly scanning text. In the Western world at least, users read top-to-bottom and left-to-right. If your users will need to scan through their options easily, your layout should account for this. As you can see, even with a small number of items, the left menu below is much easier to scan than the right:


Definitely consider readability when laying out your content.


3D Geometry uses a menu very much like our marching menu prototype to good effect, allowing users to quickly and intuitively access a wide variety of polyhedra.

However, for the right set of content, radial menus can be perfect:


Brush Size lends itself almost perfectly to a radial menu in the clay sculpting app Freeform, because it represents a spectrum rather than a set of discrete options. You can twirl your finger around the menu and push out the option you want to select.

Give dynamic feedback about the actions your interface is tracking

In an early prototype of the marching menu, we used static icons to inform users what sorts of actions they could perform. However, users often didn’t associate these static icons with actions. Some people tried tapping, others pinched, and others were just plain confused. When users did make a selection, sometimes they didn’t know how they’d done it.


Original static feedback. Users didn’t associate the icons with actions.

For the next iteration, we added some simple feedback, emphasizing the X-axis location of the user’s finger relative to the currently selected button. In testing, users picked up on what they were supposed to do quite quickly, but were often uncomfortable flinging their finger off a menu item to select it. There was no indication of what would happen if they did.

With a little added polish, though, the new users we tested with understood and started using the menus with ease. We saw a marked improvement from some very small changes: design elements that reveal the menu’s behavioral structure at a glance. This sort of basic feedback is great for all manner of menus.


Adding immediate hover feedback and X-axis feedback improved the usability of the menus. Instead of being forced to remember how each level works, the menu actively informs users as they navigate through it.


Allthecooks Recipes uses this sort of dynamic feedback well – making it clear which element is being activated and which actions you can take. Read more about Lucas Goraieb’s work on Allthecooks Recipes’ menu system in his post Pushing Boundaries.

Be aware of momentum

One of the challenges with using X,Y translation for selection and navigation is the user moving too far and accidentally jumping to the wrong item. Requiring the user to change their direction of movement, or giving them a large, obvious space to pause, greatly reduces the chances of accidental selections, cancellations, and the like. Giving users insights into what to expect next can help them adjust how quickly they’re moving, while forcing a directional change leaves room for natural motions rather than more stressful, constrained ones.
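One way to sketch that guard against overshoot: only commit when the hand has slowed, or when the commit motion requires reversing the direction that brought the user there. Units and thresholds are arbitrary:

```python
def can_commit(velocity_x, travel_direction, commit_direction,
               max_speed=50.0):
    """Guard against momentum-driven misselections.

    Allow a commit only if the hand has slowed below `max_speed`
    (mm/s, arbitrary), or if committing requires moving opposite to
    `travel_direction` (+1 or -1), so a fast sweep can't fall straight
    through into a selection.
    """
    slowed = abs(velocity_x) <= max_speed
    reversed_direction = (travel_direction != 0 and
                          commit_direction == -travel_direction)
    return slowed or reversed_direction
```

A fast rightward sweep toward a rightward commit is rejected, while the same sweep can still commit leftward immediately, which is exactly the natural stopping point the menu structure should provide.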


In the marching menu, changes in momentum flow naturally from the menu structure and obvious stopping points.

Activation: how to bring up the menu

Deciding how your menu will activate depends a lot on your application. The biggest thing to pay attention to is making sure that users understand the effects of their actions within your app.

In our testing, we tried to activate a contextual menu by having the user pull their hand back towards them. We often saw people accidentally activate the menu, but rather than discovering the action, they didn’t realize how they’d turned the menu on! Even worse, they’d often think something else had caused the menu to activate, teaching them the wrong thing.

We also had some menus which, when activated, took up a large portion of the screen. Without proper transitions, users would often become disoriented. With larger popups, you need to strike a balance between keeping a large interaction space and maintaining a sense of place in your application.

We’d have loved to have spent even more time playing with our concepts – and we will – but naturally we only had so much time for this round of testing. Here are some ideas we’d love to test more, or see from design-minded developers:

A multi-level radial menu. Just imagine a radial menu that can handle many levels of data while still having good bread-crumbing, good ways of backing out to select other information, and an obvious difference between selecting sub-menus and making a final selection. We got very excited about how quickly power users could navigate with tiny gestures, but a lack of usability stymied our prototype. A random thought on this to get you thinking – rotary phones?

A gesture for menu activation. The one “gesture” we played with to activate our menu was pulling a hand back towards the user in Z-space. This didn’t work so well. We’d love to see developers explore this one more. The biggest challenge here is finding a gesture that is unique enough not to accidentally occur during normal use, while also being easy enough to learn and activate reliably. We’ve recently seen a few really cool ones around the office (especially two-handed gestures), but there’s a lot of cool stuff to explore on this topic.

A depth-based solution to multi-level menus. The idea that deeper levels sit behind top-level content makes a lot of sense, and reaching in to pull submenus out is an enticing possibility. We’re only just getting started.

We’d love to hear your ideas around interaction designs for the Leap Motion Controller. What are some other ways to think differently about accessing options, browsing content, or creating something new? Post your thoughts in our thread on the community forums.

Last weekend I jumped in and participated in the 2013 What Would Molydeux 48-hour game jam. I was a participant in the 2012 jam at the SF site, which was an amazing experience. Tons of great developers from across the Bay Area got together to make games based on the tweets of @PeterMolydeux. Molyjam 1 was probably a once-in-a-lifetime sort of thing, and how the event changed culturally was detailed really well by organizer Brandon Sheffield in his own post-mortem.

With Sifteo, I spent a lot of time running and supporting game jams, so I’ve gotten to know a lot of folks in the Bay Area jammer community. Due to space limits, Molyjam split the Bay Area into three sites, two in SF and one in Oakland. Sadly, that meant a lot of my friends were at different sites. Next year I’d love to see a single combined space, as it makes for much better buzz and social space, but logistics like that are super hard for the organizers.

The Sifteo February Game Jam

Friday Night

A couple of folks I’ve worked and jammed with before and I got together and started pitching ideas around. Ian Guthridge and Michael Downing, both experienced game makers and former co-workers, wanted to work together. Neither could work on Saturday, which was a fun little caveat, but not a big issue. Looking at the “inspirations” we’d been provided for this jam, we talked about a few games. Downing had a solid idea for a PR sniper game where you had to censor yourself. There was another concept for a game based on players’ childhoods that seemed cool but would have been a bit of a content nightmare.

Our selected “inspiration”

After a few hours of joking and pitching, we landed on an idea for a reverse snake game based on the quote, “You sneak that little thing in there at the end.” We started with the Osmos mechanic of dropping a bit of yourself to get smaller and move, but having to avoid the pieces you dropped. We messed with it a bit and it became reverse snake: you break yourself apart, but each break creates smaller chunks you have to avoid. We played with letting you eat smaller chunks to remove them, or changing how each chunk moved. We knew we’d want different behaviors for different chunks. At this point we figured we had a pretty good idea. Downing and I started on aesthetics (Michael was gonna be doing all our sounds and music). After a little bit we decided to go work from a wine bar near the site. Nice way to jam, if I do say so myself.

Ian said he’d code up our core snake movement and breaking apart. Ian and I had used Flixel before, so he put the core together with that. I worked on art style till about 2am; I think Ian worked till 4 or so. Around 7am I had an email in my inbox with a working snake mechanic: goal, snake breaking, timer, and movement.


Ian’s Saturday morning prototype.

I’d been messing with a few different art styles. Michael and I really liked the idea of referencing Alexander Calder’s work. He’s been a favorite of mine for a long time. I worked out a style based on his work, another based on origami, and a third based on simple bubbles. The first was the team’s favorite, but the snake chunks ended up needing to be a lot smaller than I had hoped. The simple bubbles ended up looking the best.

A Calder mobile

Friday night’s art test

Ian left me with a nice raw set of functionality, but it wasn’t fun yet and it was definitely not a game. I spent a lot of time Saturday working on making the game feel fun. Getting the speed, spacing, and chunking feeling good took a solid three or four hours. In between working on that I got the initial art integrated (which is when I realized my first art style wasn’t going to work). I also started work on new AI behaviors and a game timer that better matched our aesthetic.

Around lunchtime Saturday.

From here there was a lot of tuning. I played with having almost 70-80 nodes, where the player would be inundated with new chunks. It looked pretty impressive, especially when the AI behaviors kicked in. In that version I built a mechanic where the player snake could eat other nodes, the idea being that you’d break yourself up and then start eating chunks to make the game manageable. This was a direction, but I didn’t dig it too much. It wasn’t fun yet, but I could tell now that we could make it so. We just needed to do the right tuning. Plus it was at least starting to look like a game!

Saturday night’s alright.

Ian showed up around 10pm and we started looking at our status. We both agreed that eating wasn’t the direction we needed to go. We decided I’d code up a version where, when a player broke apart, their head would always be on the longer section. Other nodes would always be in the play space, and the player would simply have to avoid them. It was more elegant and, based on how the game played, probably more interesting. Ian worked on mouse controls (up till now we’d used keyboard arrows) and added a bunch of fun extras while he was at it. I worked in some subtle visual effects like objects casting shadows on the game board, plus some animated noise and sepia filters.
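
The “head stays on the longer section” rule is simple to sketch. Here’s a hypothetical illustration in Python (the actual game was written in ActionScript with Flixel, and the names here are made up):

```python
def split_snake(nodes, index):
    """Split a snake (list of segments, head first) at `index`.

    The player keeps whichever half is longer, so the head effectively
    migrates to the longer section. Returns (kept, loose) lists.
    """
    front, back = nodes[:index], nodes[index:]
    if len(front) >= len(back):
        return front, back  # head stays with the front half
    return back, front      # head "jumps" to the longer back half
```

So breaking near your own head leaves you long and drops a tiny chunk, while breaking near the tail effectively teleports the head to the long remainder.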


Sunday morning madness.

Sunday was a ton of fun. Ian, Mike and I all met up at a coffee shop in the morning and started working. We got caught up on what we’d all been working on and started more tuning and UX. I got to work on animating a goal and fixing some outstanding bugs. I was looking at a painting by Kandinsky while working on the goal. Always good to draw some inspiration from the masters.

Kandinsky – Farbstudie: Quadrate mit konzentrischen Ringen (1913)

I asked Ian to do some dynamic line drawing between nodes, and he was even able to get a nice sketchy animation going, just like the rest of our animations. I swear, I do love working with other professional game makers. There’s really something to be said for working with a team you trust to get the job done. It meant that working Saturday alone was fine, and on Sunday we simply set out a goal and the team knew the tasks they needed to do, when to speak up and ask questions, and when we needed to meet up for a longer discussion. It is also really cool when all the art, music, and code start flowing in. Running “git pull” becomes a pretty exciting thing.

Speaking of music, `checkout` (see what I did there?) the music Michael Downing did for the game. The track in the game changes dynamically with the game state and integrates nicely with our sound effects for a pretty great audio experience. I particularly like the build towards the end that takes the game from ambient and relaxing to much more intense.

I decided to go for a hand-animated scheme kinda inspired by things like Dom2D’s recent JS Joust animations or some of what you see from http://www.untoldentertainment.com/, but a bit more subdued than either of them. I had an Illustrator workflow that, though unrefined, gave me a passable style I liked. I did some subtle animation on the background (probably too subtle ’cause even I miss it sometimes), some variation on the clock from frame to frame, and the animated goal. It’s been a while since I’ve done any drawing or animation for a game, so I had a lot of fun getting back into it. It was all very graphic design-y, which is more my forte anyway. I even had enough time to rework our color palette a bit, again drawing on that Kandinsky painting.

Final colors + Ian’s great explosions and lines

In the last few hours we put a lot of final touches on the game. Michael and Ian spent a lot of time refining the sounds and audio code; Flash does some fun things when you’re trying to loop audio, so they had to work around it. While they were working on that, I worked on our logo along with our start and end screens. I had planned out a fun interactive first-time user experience that would teach the controls and the game’s rules, but it was a bit more than I could have polished in the last few hours of the jam. So instead I did a static start screen along with pretty minimal win and lose screens. I think they all come together nicely.

The instructions screen has the nice animated shadows and visual effects, which I like a lot.

Instruction Screen

I think the last commit went in right around 6:55pm, with the jam ending at 7. We’ve become pretty good at scoping out what we can do in 48 hours, which is nice. Though I’m starting to itch a bit to do some longer jam-style projects. We may even take this one a bit further; there are a few cool ideas we’ve had for the mechanics.

In the meantime, you can play the game here.

The presentation went well, and there were a bunch of other fun games shown. I’ll be looking forward to doing the jam again next year. In the meantime we’ve got iamagamer coming up, another Ludum Dare around the corner, and Global Game Jam early next year; I’m sure more jams will show up in the future.

What Went Well:

• Experienced team trusted each other to get the work done and knew how to scope.

• Engineering and sound took Saturday off. Design got to work on iterating all Saturday without blocking others.

• Early discussions of aesthetic kept us all on the same page.

What We Could Do Better:

• We weren’t crazy about the theme. I loved what we came up with, but the game does feel a mite generic right now.

• I could have done a bit of prep ahead of the presentation. I was a bit off the cuff and could have shown the game better.

• Based on the team next to us, we probably need more champagne next time. Seems to help the creative process.

A few other Molyjam SF games worth looking at:

Bender: the Unconditional Liver – why we need more champagne. Also, these folks rewrote their entire game Saturday night when their first engine didn’t quite pan out.

The Rouge Less Traveled: The Legend of Rand(3,7) Gems – Easily the best named game of the jam. Also a generally fun twine game.

Haughty or Naughty – I really enjoy the premise of this UGC game.

A week or so back I came across an old notebook from when I lived in Atlanta, where I’d scribbled a bunch of thoughts on social networks, my issues with trends, and where I thought things might be going. Looking at those thoughts a few years later was cool. I’m happy to say I got a few things right, like a push for more meaningful content in smaller communities (Quora, Branch, etc.). Though I absolutely missed things like the rise of Tumblr.

Finding that notebook kinda meshed with some interesting thoughts I’ve had of late about how I use social networking these days. The interesting thing I’ve been playing with is that my favorite social network right now is Skype. Skype, despite being a chat client, has all the hallmarks of a social network: connections, group feeds, private messaging, media, comments, and archiving.

The thing about Skype is that content targeting and organization is ad-hoc. This is opposed to most modern social networks, which rely on pre-built groupings for connections. Facebook has its “lists” and “groups”; Google+ has its “circles”. But these assume that static groupings of people will make sense for most use cases. I think that’s a fallacy. My life is often ad-hoc, and my needs for groups are dynamic. Neither the structures nor the interfaces provided in mainstream social networks support the ad-hoc use case.

As an aside, the best model for this ad-hoc targeting and privacy I’ve seen in action is actually Facebook invite-only events. I’ve seen some really interesting dynamics pick up with event pages used as small community focal points. Pages for small brands with high engagement see similar dynamics. It’s probably telling that the model I like best is often used for planning real-world get-togethers and networking. Get-togethers with people of similar interests who don’t know each other are some of the most enjoyable around. It’s always great to meet new people who are like you. Can we extend that experience to the social web in new ways?

I find there are two primary modes of communication I want to have on the social web. The first is broadcast. We’re really good at broadcast. This is my Twitter feed, my Facebook feed, and my blog. I’m creating content and blasting it out into the ether. Sometimes I add some targeting to surface it to specific people, but by and large it is a public broadcast. It is opt-in content that the people seeing it have “tuned in to”. It’s loud, viral, low-friction, and often returns nice, if shallow, bits of social feedback in the form of likes and comments. As an online community we love it.

There are of course downsides to the broadcast method of communication. There’s a bad signal-to-noise ratio, and content, even in aggregate, is really ephemeral. We move from one item to the next very quickly. Often we see trends and memes, but miss details, depth, and connections to others. Hence we’ve seen the rise of services trying to filter and aggregate this content. I wonder if there’s a better option.

The second mode of communication is when we only want to talk with specific people, or a specific group of people. This one is actually really ill-served. As I said above, most social networks deal with this via groups. On the surface this makes sense: “work friends”, “close friends”, “family”, “church friends”, “gaming buddies”. When asked to categorize people, these are the common connections we make. Where do we know someone from? How do we connect with them in the real world? When we get to sharing and discussing content on the web, though, these groupings break down. Interests only sometimes line up with our real-world behaviors.

An example piece of content: I’ve got this esoteric, racy, experimental music/interactive work from some Burning-Man-esque scene. I love this sort of thing and love talking about it with like-minded people:


  • My family and many of the people I know from work probably aren’t that interested. Some people might even be offended or just find it noise, so I’m not going to broadcast it. I’d rather not waste their time.
  • I know I want to send it to a specific group of people. I’ve got my brother, three friends from work, a friend from an old job, a friend I met at a party, an old college professor, and a bunch of my friends who went to college with me who I’d like to share this with. Not a group I’d have pre-made.
  • Where I met these people and where I see them in real life don’t really apply here.

What I need is a quick, ad-hoc group, relevant to the content I’m sharing. I also need a way of adding people to it quickly. With this content, what I’d probably do right now is throw it on my brother’s Facebook wall or just link it in one or two of my Skype chats. But what if I instead had a good way to send the content to a larger, defined group of people and link them all together in a more private conversation? I think there’s a chance for much more interesting discussion happening there. Forums and other permanent online communities sort of cover this, but there’s a lot of overhead there.

So what does a user gain from sending to a smaller, tighter group? It seems to me it creates a much better environment for tight feedback loops and conversation with those people. It also significantly improves the signal-to-noise ratio. That’s why I love using Skype. Instead of selecting from a large predetermined group of people (though I can do that with some of my chats), I can grab a few people, throw them into a chat, and send an item out. I often have ad-hoc Skype chats grow, live, and die over minutes, hours, or days. It is an easy flow, and it allows my experience with the app to mimic the dynamics of my life.

I have some more thoughts on how we might implement these sorts of interactions, but for now I’m going to leave it at this. I’d love to get others’ thoughts on the topic.

A few weekends back I participated in the WhammyJammy 48-hour audio game jam here in San Francisco. A game jam, for the rest of the tech world, is usually called a hack-a-thon. The basic idea is that a bunch of folks get together, take a theme, and then have 48 hours to make something playable. Whatever you call them, game jams are a ton of fun and a great way to work on new things, learn, and meet cool people. They’ve become pretty popular in the games world of late, especially with the recent indie game creator revolution. There are so many tools for rapid development nowadays that creating really great games in 48 hours is actually pretty realistic, assuming you know what you’re doing.

Folks playing the completed “Bass Jumper” game at the end of the jam.

WhammyJammy was organized in part by Steven An, who I met at this year’s IndieCade conference. It turned out we were both San Francisco area game devs, and he let me know about the jam. Before WhammyJammy I’d done two Global Game Jams, SCAD’s 24-Hour Generate Jam, MolyJam, my own 48-hour Ludum Dare prep jam, and Ludum Dare 23, so suffice it to say I’ve had a fair bit of rapid development experience. I haven’t gotten to do as many SF jams as I’d like, and I’d had a great time at MolyJam meeting other Bay Area game creators, so I had high hopes.

WhammyJammy’s turnout was pretty small compared to the 150+ participants I’m used to from Atlanta GGJ events, but the size turned out to be really good. PinchIt.com generously lent us their beautiful office space in SF’s Mission District, right next to the BART station. We started out by meeting everyone and pitching around a bunch of audio game ideas. After an hour or so of chatting and pitching, we broke up into groups and started brainstorming more on the ideas that had some heat.

Pitching Ideas Around

I had mentioned wanting to do a physical space game. I’ve been inspired a lot in the last year by Doug Wilson’s J.S. Joust work and a lot of the physical games work coming out of the NYC indie scene, and I’d wanted to spend a jam messing with a physical space game. There were some other folks with interest, so we came up with some ideas:

Jump Rope Game:
We played around with the idea of accelerometers on jump ropes, or maybe a tug-of-war game with generative audio events: perhaps a team that pulls towards high pitches and one towards low. We had a lot of heat around this idea but couldn’t solidify it into a game we were happy with.

Hunter Hunted:
One of the sound designers on the team told us about a folk game called “hunter hunted” that kids played a lot at a summer camp he helped at. The kids would form a circle around two blindfolded players and an “object of power”. The first player to find the object of power and hit the other player with it wins. We pitched around some ideas for using a soundscape to enhance the game, maybe tracking players and generating panned audio, but nothing stuck, and the tech constraints were gonna be a bit high for 48 hours.

We ended up with a pitch for a rhythm game without dance pads. Players would stand around a central point (we really wanted this to be a large bass amp) that would shoot notes out to each player. Players would want to jump as high and land as hard as they could while remaining on time to score the most points, pushing sheer strength and airtime over the dexterity associated with traditional rhythm games. We decided that, considering the 48-hour scope, this was our best bet.

We all broke off onto various tasks. I pulled a bunch of code I’d written for another physical computing project. That came in three large parts. The first was the actual Arduino code for interpreting the accelerometer data and sending it to the PC over a serial connection. The next was code for grabbing the data from the Arduino and doing a bunch of signal processing and display, so we would understand what the actual data looked like and come up with the right processing and analysis. The final piece was the socket-server bridge between Processing and Flash, so we could build the game quickly and have it running on Mac and PC with low friction. We didn’t know who’d be running the final version, as we’d yet to solve audio output. Few of us had tried to output to a bass amp before.
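
To give a flavor of the first two parts, here’s a hypothetical sketch in Python of parsing one line of accelerometer output off the serial connection. The wire format, zero offset, and scale factor are illustrative assumptions, not the jam’s actual Arduino protocol:

```python
def parse_accel_line(line, zero=512, counts_per_g=102.3):
    """Parse one line of comma-separated accelerometer output from a
    serial stream, e.g. b"512,498,730" (raw 10-bit ADC counts for
    x, y, z), into g values.

    The zero point (~512 counts) and scale (~102 counts per g, roughly a
    330 mV/g part on a 10-bit, 3.3 V ADC) are made-up example numbers.
    """
    x, y, z = (int(v) for v in line.decode("ascii").strip().split(","))
    return tuple((v - zero) / counts_per_g for v in (x, y, z))
```

A quick sanity check: a reading of `b"512,512,512"` decodes to roughly zero g on all three axes, while `b"614,512,512"` is about +1 g on x.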

Jim Crawford of “Frog Fractions” fame did most of our gameplay programming. He and Eliot Lash (who was great at jumping from task to task as needed, hitting front-end, back-end, and hardware) worked together to integrate the Processing-Flash bridge from my (hacky) example code into the game. He also worked with Sam and Brett, our audio guys, to come up with a snazzy MIDI-based encoding system for all our gameplay note hits for each player. They could simply create a MIDI track alongside their audio track with all of the gameplay data for the song. It worked out really well, and by the end of the jam we had a few tracks they composed as well as some tracks they beat-mapped for the MIDI system. I can’t say how great it was to get to work alongside audio folks again. Reminded me of my audio engineering work back in Atlanta.
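
Once you have note times out of a MIDI track, scoring a landing against them is the easy part. This is a hypothetical Python sketch of the idea, not the jam’s actual Flash code; the function name and the 150 ms window are made up:

```python
def match_note(note_times_ms, hit_time_ms, window_ms=150):
    """Match a detected landing against the beat map's note times
    (e.g. extracted from a MIDI track, in milliseconds).

    Returns the matched note time if the landing falls within the timing
    window of the nearest note, else None (a miss).
    """
    if not note_times_ms:
        return None
    best = min(note_times_ms, key=lambda t: abs(t - hit_time_ms))
    return best if abs(best - hit_time_ms) <= window_ms else None
```

The nice property of driving this from a MIDI track is that the composers own the beat map: tightening a song’s timing is an edit in their sequencer, not a code change.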

Steven did a great job refactoring my hacked-together signal analysis code, making it far more performant. He even spiffed up the front-end of the Processing script so it read far more easily, and made it work great with multiple Arduino inputs. By the end of the jam the Processing side of things was running great, and the updated front-end helped out a ton with debugging. Steven tried a bunch of different signal analysis tricks throughout the jam; we regularly got to watch him jumping up and down with an accelerometer while looking intently at a data monitor.
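
One of the simpler tricks for this kind of data is watching the acceleration magnitude: a jump reads as a stretch of near-free-fall followed by an impact spike on landing. Here’s a rough hypothetical sketch in Python; the thresholds are illustrative guesses, not the tuned values from Steven’s actual analysis:

```python
import math

def detect_landings(samples, freefall_g=0.3, impact_g=2.5):
    """Very rough landing detector over (x, y, z) accelerometer samples
    in g. Flags a landing when a stretch of near-free-fall (magnitude
    well below 1 g) is followed by an impact spike.
    """
    landings = []
    in_air = False
    for i, (x, y, z) in enumerate(samples):
        mag = math.sqrt(x * x + y * y + z * z)
        if mag < freefall_g:
            in_air = True          # sensor is roughly in free fall
        elif in_air and mag > impact_g:
            landings.append(i)     # impact right after free fall
            in_air = False
    return landings
```

Real data is, of course, much noisier than this; rapid back-to-back jumps blur the free-fall window, which is exactly the trouble we hit with fast note sections.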

Sam and Brett playing one of the other jam games

Sam Ballard and Brett Shipes were our audio pros for this one. They both had a ton of great ideas and feedback during the early concept stages and really did some terrific work once we hit production. I can’t pretend to understand all the ins and outs of their work, but they both composed some fun songs and worked to make sure everything fit our gameplay. The accelerometers had trouble with rapid jumping, so they arranged their songs to leave space between jumps for each player, with fast sets spread amongst the other players. Brett kept having his equipment blow fuses, which speaks a bit to old SF wiring, but he was able to work around it once we figured out what was going on. And if I recall, Sam beat-mapped the JoCo cover of “We Will Rock You”, which makes me very happy.

For my part, I worked closely with Eliot to figure out the best way to mount our accelerometers and Arduinos. I wanted to mount the accelerometers close to the microcontrollers and have players pocket the whole system, tethering them to the computer with long USB runs. Eliot was concerned there was a high chance of shorting out the Arduino, especially considering our breadboard wiring. He proposed that we make long wire runs with the accelerometers soldered on the ends. It was a fair bit more work but, after some hacking, it ended up working nicely. We didn’t crush any electronics the whole weekend.

My other bit was mounting the projector. We really wanted to mount it vertically and project onto the floor, but we were fighting low ceilings and didn’t have a good way to mount to them anyway. I toyed with the idea of mounting on the fire escape and playing on the street below, but that seemed to offer more problems than solutions. I ended up mounting the projector at a really extreme angle on top of a bookcase. I picked up some bungee cords Sunday morning and used them to keep the projector nicely mounted. I was actually really surprised how well it worked.

We demoed the game Sunday evening, and not only did it work, we actually had a lot of fun playing it. Using accelerometers instead of dance pads was probably not the best idea gameplay-wise due to bits of inaccuracy, but I think we all learned a lot, and there’s definitely a certain magic to just jumping on the floor and having it just work. Next steps would be to go wireless and make the accelerometer data more useful. Overall, though, it was quite a bit of fun and I know I learned a lot.

Playing games at the end of the jam

Some of the other projects that got presented were really cool too. One of my favorite parts of a jam is getting to see what everyone else produced in the time. There’s always a certain camaraderie from everyone going through the same 48-hour insanity together, so no matter what comes out on the other end, you want to celebrate what everyone else has done.

Steven put together a great little video of some of the games from the jam. You can watch it here:

If you know anyone who’s been to art and design school, or who’s a practicing artist/designer, you’ve probably noticed that we see the world through a bit of a critical lens. Some of us have gotten our lenses stuck. Whether it’s your friend who feels the need to comment on the type in every single ad or movie poster (yep, me), or the guy who complains about the push/pull affordances of the door handle at work (guilty again, actually). My friends, professors, and I actually have a bit of a game of spotting particularly bad, weird, or just plain wrong designs or implementations, and plenty of pictures get MMSed, emailed, and shown around during class. Here’s a recent one I found in the parking lot of a small strip center that we SCAD-Atlanta students often frequent (Panera and Willies are only a 5-6 minute drive from school).

Sign - Do Not (Leave) Valuables In Vehicle

Does someone want to explain this to me?

Sign - Do Not Leave) Valuables In Vehicle

Oh it only gets worse. Found this one behind R. Thomas Grill.

Testing Analog Games in Cognitive Art of Game Design

Standard scene in an @jofsharp or @ravenborne class.

That’s one of the projects I’m working on in @ravenborne’s class in the foreground there. @ckalenda (that’s him in the front left) and I are experimenting with ways to do a physics-based space combat miniatures game. Think the sort of maneuvers you’d see in Battlestar or Babylon 5. Right now the game is being played on a whiteboard; players note all of their states and vectors next to their ships’ models. We’ve been iterating on the mechanics a bit and we’re pretty happy at this stage. I’m taking a break from my spreadsheets and Illustrator documents right now, getting ready for tomorrow’s battery of testing. I’ve been meaning to reboot my writing efforts for a while now, and it seems like a good time.

You see, I’m about to get my BFA (that’s a Bachelor of Fine Arts) in Interaction Design and Game Development and move out of state for the first time, heading from Atlanta, GA to San Francisco, CA to work as a game designer. It’s kinda funny when a plan comes together, though hardly in the way I had planned. The long and short of it is that in the last year I’ve begun to feel like I actually have something worth writing about, my own perspective, and I think the ability to add something meaningful to the general conversation once in a while. I talk about game design, art, history (on occasion art history), design, programming, and sometimes science, math, and other general miscellany.