Raised by Savages

Explorations, Things!

expedition!

Evan and I recently received official word that we are now the proud co-founders of an LLC: Savage Internet. We have yet to nail down an official mission, but we’re hoping to look at the space of educational technologies. In particular, many online education platforms (e.g. Khan Academy) focus heavily on STEM fields, since those fields have well-defined procedures and easy-to-grade right-or-wrong answers. But what does a MOOC (Massive Open Online Course) look like when the goal is to teach art, history, or culture?

We’re starting to explore that question with the prototype we are launching in an invite-only alpha. Our prototype is called expedition!, and the basic idea is a platform for people to ask and answer questions about the world.

A simple scenario is this: Evan and I have a friend named Anami who is a writer. The novel she’s currently working on features airships, and she’s been doing piles of research on the various mechanisms of flight, both past and present. She had heard of the oldest air and space museum in the world, located in the northeast suburbs of Paris. She didn’t really have an opportunity to go there herself, but she knew that Evan and I were heading that direction for a conference. She asked us to photo-document the museum for her and bring back any ridiculous stories we felt were noteworthy. So we did.

We’ve implemented some basic game mechanics (badge systems, XP, etc.) for expedition!, but we’re not totally sure what the right mechanics are. We’re also kicking around ideas for a real-time component of the system (where, for example, we could chat or video chat live with Anami from the museum and probe deeper into things that she was really interested in). If you have any thoughts on the alpha, or if you want an invitation to play around with it, do let us know. :)

Evan and I will be implementing features and testing stuff out on our trip this summer to Southeast Asia. You can follow us on expedition! or on our new blog: Ramblelust.

fraiche

Continuing my series of “things I’ve done in class before”, I wanted to write about the project I worked on with Jonathan Kummerfeld and Peggy Chi. This project was an extension of the aforementioned H2O IQ: we explored what could happen if we moved from one device in a single user’s garden to many devices scattered through a community garden or small farm, all served by a single Raspberry Pi.

We had a few different tasks to balance:

  1. Serving client requests for data via the web interface
  2. Accepting client instructions via the web interface and forwarding them to the in-garden watering devices
  3. Receiving updated readings from in-garden sensors
  4. Performing machine learning on the sensor readings and watering instructions

We set up three scenarios to test in:

  1. Single user: 1 client, 3 plants
  2. Small farm: 1 client, 50 plants
  3. Community garden: 50 clients, 150 plants

and we had six different scheduling algorithms to try:

  1. naive - update the watering model as soon as a client makes a request for data from that plant
  2. periodic offline - every 5 minutes, update all watering models
  3. sensor-triggered - when new watering data are sent from the in-garden devices, update those watering models
  4. hybrid - a mixture of the periodic offline scheduler and the sensor-triggered scheduler
  5. low load scheduler - this scheduler watches client traffic, and when there are fewer than 15 client requests in the preceding 5 minutes, all watering models are updated
  6. predictive scheduler - a second machine learning algorithm watches the pattern of client traffic: just before a peak is predicted to occur, all watering models are updated

We wanted low client latency and “fresh” models. We defined model freshness to be time_at_serving - time_at_last_update.
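
To make the scheduling policies a bit more concrete, here is a minimal sketch of roughly how the low load scheduler might be wired up in Tornado; the update function, threshold, and check interval below are illustrative stand-ins, not our actual project code.

```python
import time
from collections import deque

import tornado.ioloop


def update_all_watering_models():
    """Stand-in for the real machine-learning step that retrains every plant's model."""
    print("updating all watering models at", time.time())


class LowLoadScheduler:
    """Sketch of the low load policy: retrain everything only when fewer than
    `threshold` client requests arrived during the preceding `window` seconds."""

    def __init__(self, window=300, threshold=15):
        self.window = window
        self.threshold = threshold
        self.request_times = deque()

    def record_request(self):
        # Web handlers would call this on every incoming client request.
        self.request_times.append(time.time())

    def tick(self):
        # Drop requests that fell out of the window, then check the load.
        now = time.time()
        while self.request_times and now - self.request_times[0] > self.window:
            self.request_times.popleft()
        if len(self.request_times) < self.threshold:
            update_all_watering_models()


scheduler = LowLoadScheduler()
# Check the load once a minute; PeriodicCallback takes its interval in milliseconds.
tornado.ioloop.PeriodicCallback(scheduler.tick, 60 * 1000).start()
tornado.ioloop.IOLoop.current().start()
```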

Our basic experimental setup included one Raspberry Pi and one remote Mac. The Mac ran PhantomJS to simulate client requests and to submit sensor data via a web endpoint. Our webserver and schedulers were all written in Python using Tornado.
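
A stripped-down version of that kind of Tornado server might look something like the following; the routes, payload format, and in-memory store are invented for illustration rather than taken from our code.

```python
import json

import tornado.ioloop
import tornado.web

# Hypothetical in-memory store standing in for the real sensor database.
READINGS = {}


class SensorHandler(tornado.web.RequestHandler):
    """Accepts moisture readings POSTed on behalf of the in-garden sensors."""

    def post(self):
        reading = json.loads(self.request.body)
        READINGS.setdefault(reading["plant_id"], []).append(reading["moisture"])
        self.write({"status": "ok"})


class PlantHandler(tornado.web.RequestHandler):
    """Serves the latest reading for a plant to web clients."""

    def get(self, plant_id):
        history = READINGS.get(plant_id, [])
        self.write({"plant_id": plant_id, "latest": history[-1] if history else None})


if __name__ == "__main__":
    app = tornado.web.Application([
        (r"/sensor", SensorHandler),
        (r"/plant/([^/]+)", PlantHandler),
    ])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```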

The results were promising!

Freshness per scheduler in each scenario

Latency per scheduler in each scenario

There are more details in the paper we wrote, but suffice it to say for now that we were happy with the Raspberry Pi’s performance in general, and that it would certainly be feasible and reasonable to use it as a machine-learning webserver in a Fast Response And Intelligently Controlled Harvest Environment. As usual, the code for the project, as well as some documentation, can be found on GitHub.

H2O IQ

As a little series, I’m going to write retrospectives on several projects I’ve done at Berkeley, starting with the recent ones while I still remember them.

In Fall 2012 I took Björn Hartmann’s inaugural class on Interactive Device Design, co-taught with [Paul Wright](http://www.me.berkeley.edu/faculty/wright/) of the Mechanical Engineering faculty in the spanking-new CITRIS Invention Lab. The goal of the class was to educate the software-inclined in physical manufacture and the physical-inclined in software. Students came from a few different places: grads and undergrads from EECS, Mechanical Engineering, Information, and Biomedical Engineering. We worked through several homework assignments involving implementing games and basic controls in software and hardware, as well as performing basic modeling tasks in SolidWorks.

Ultimately we split into groups for our final course projects. I had the pleasure of working with Shiry Ginosar from EECS and Mark Fuge from ME. We had a series of discussions about projects that interested us, and the theme we ended up hitting on was physical devices with some kind of “smart” configuration aspect: for example, a pair of shoes with dynamically inflatable inserts that adjust to the wearer’s stance, adapt when they change from walking to running, and so on, to reduce pressure on particular areas of the foot. These types of self-configuring products are becoming more feasible as internet connectivity becomes more ubiquitous and cloud processing cheaper; just think of the Nest Thermostat.

The project we finally decided to pursue was a water-conserving gardening tool, which we call H2O IQ. Inspired by water shortages in both California and Shiry’s home country, Israel, we set out to investigate what it would take to build a device that can be planted in the garden, attached to a drip irrigation system, and forgotten about.

It turns out, not that much. It also turns out that’s not what people want. As a part of the project milestones we were required to do preliminary user interviews. As it happens, gardeners enjoy the time they spend in the garden, and none of them liked the idea of a computer taking over. So much for our machine-learning self-adapting dreams.

Instead, we headed in the direction of a notification system. Our garden device would share soil with a plant and alert its owner when the moisture was too high or too low, prompting action on their part. It could also, of course, serve as a backup watering device in case the owner is on vacation or otherwise unable to water.

We ended up with H2O IQ, which looks like this:

And works like this:

The 3D printed piece has a solar panel on top for power, a servo for controlling water flow through the irrigation system (connected at the nubs), an XBee for communicating with the Pi upstream, four buttons for changing the watering schedule in-garden, and a moisture sensor built from cast plaster.

The Raspberry Pi functions as a webserver, and can communicate with the device via XBee radio. It serves a website that allows the user to see a graph of the watering history of the plant, as well as perform a one-time automatic watering or set a schedule of future watering.
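
As a rough idea of what the one-time watering path could look like on the Pi, here is a minimal sketch assuming the XBee is bridged over a serial port and the in-garden device understands a simple text command; the port name, baud rate, and command format are all made up for illustration.

```python
import serial  # pyserial; the project itself may have used a different XBee library

# Hypothetical serial bridge to the XBee radio attached to the Pi.
XBEE_PORT = "/dev/ttyUSB0"
BAUD_RATE = 9600


def water_plant(seconds):
    """Send an invented 'water for N seconds' command and return the device's reply."""
    with serial.Serial(XBEE_PORT, BAUD_RATE, timeout=2) as xbee:
        xbee.write(f"WATER {seconds}\n".encode())
        return xbee.readline().decode().strip()  # e.g. an acknowledgement string


if __name__ == "__main__":
    print(water_plant(10))
```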

We didn’t actually have time to test the product during the course, but we made a final presentation at the end which you can see on Google Docs. All of our code, SolidWorks models, and Eagle files for circuit boards (as well as a teensy bit of documentation…) can be found on GitHub.

It was an interesting course project that led into some territory I’d not explored before, but which I suspect I’ll need as I delve deeper into the world of hardware.

FAB at CHI

Welcome to my blog! The following is a draft of a post under submission to the FAB at CHI workshop happening this year at CHI 2013 in Paris. My advisor, Björn Hartmann, is one of the directors of the workshop.

Prototyping Tangible Input Devices
with Digital Fabrication

Valkyrie Savage
Björn Hartmann


Tangible user interfaces (TUIs) are, according to Hiroshi Ishii, about “mak[ing] digital information directly manipulatable with our hands and perceptible through our peripheral senses through its physical embodiment”. Although touchscreen-based interactions are increasingly popular as smartphones continue to sell, there are still strong arguments for maintaining the tangibility of interfaces: these arguments range from speed and accuracy (a gamer using a gaming console) to visibility (the ability of others to learn from and interact with one’s data in a shared space) to safety and accessibility (including eyes-free interfaces for driving). We have previously investigated the benefits of tangibility in How Bodies Matter.

3D printing holds obvious promise for the physical design and fabrication of tangible interfaces. However, because such interfaces are interactive, they require an integration of physical form and electronics. Few of the early users of 3D printing can currently create such objects. For example, we surveyed the online community Thingiverse; presently it and similar sites show a definite tilt towards objects like 3D scans of artwork at the Art Institute of Chicago. These things are immobile, captured rather than designed, and intended to be used as jewelry or art pieces. A smaller set of things on the site have mechanical movement of some kind, like toy cars and moon rovers. A third, yet smaller, class are things that are both mechanically and electronically functional, like Atari joystick replacements. The users who dabble in this last sector are typically experts in PCB design and design for 3D printing.

“Iconic Lion at the Steps of the Art Institute of Chicago” by ArtInstituteChicago on Thingiverse
“Moon Rover” by emmett on Thingiverse
“Arcade Stick” by srepmub on Thingiverse

Thingiverse mostly hosts 3D-printable designs, but 3D printers are not the only option: other classes of digital fabrication hardware, like vinyl cutters, have also reached consumer-friendly price points. Our group at Berkeley is examining how to combine the capabilities of these types of hardware to assist designers in prototyping tangible input devices, with an eye towards the ultimate goal of making hardware prototypes more like software prototypes: rapidly iterable and immediately functional through tools that are easily learned. We want to use our work to educate and excite high school students in STEM fields.

Our first project, Midas, explored the creation of custom capacitive touch sensors that can be designed and made functional without knowledge of electronics or programming skill. These types of sensors can be used, for example, to enable back-of-phone interactions that don’t occlude output while a user gives input; or to experiment with the placement of interactive areas on a new computer peripheral. We even used it to build a touch-sensitive postcard that plays songs and a papercraft pinball machine that can actually control a PC-based pinball game.

The Midas video submitted to UIST 2012, describing the user flow and basic implementation of the system, can be found on YouTube.

A Midas-powered prototype enabling back-of-phone interactions for checking email.

Midas consists of a design tool for layout of the sensors, a vinyl cutter for fabrication of them, and a small microcontroller for communication with them. The design tool takes the drag-and-drop paradigm currently prevalent in GUI development and expands it to hardware development: the designer does not trouble herself with the “plumbing” that turns a high-level design into a lower-level representation for display or fabrication. In a GUI editor this means the tool is responsible for determining pixel locations and managing components at runtime, while in Midas we create vector graphics of the designer’s sensors and appropriate connective traces. In both cases, the designer is free to concern herself with the what rather than the how.

Once a designer has completed her sensor layout with Midas, instructions are generated which lead her through a multi-step fabrication and assembly process. In this process, she cuts her custom sensors from copper foil on a vinyl cutter, adheres them to her object, and connects them to color-coded wires. She then uses the interface to describe on-screen interactions through a record-and-replay framework, or to program more complex interactions through WebSockets.
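
To give a sense of what programming against that WebSocket hook might look like, here is a minimal client sketch; the endpoint URL and event format are assumptions for illustration, not Midas’s documented protocol.

```python
import asyncio
import json

import websockets  # third-party "websockets" package

# Assumed endpoint and message shape; the real Midas protocol may differ.
MIDAS_WS_URL = "ws://localhost:8485/events"


async def listen_for_touches():
    async with websockets.connect(MIDAS_WS_URL) as ws:
        async for message in ws:
            event = json.loads(message)
            # React to a touch on a hypothetical back-of-phone sensor.
            if event.get("sensor") == "back_button" and event.get("state") == "touched":
                print("back-of-phone button touched: open email")


asyncio.run(listen_for_touches())
```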

In user tests, we found the tool suitable for first-time users, and we found interest in it at venues like FabLearn, a conference for fabrication technologies in education; and Sketching in Hardware, a weekend workshop for hackers, artists, and academics. However, it has a fundamental limitation: it only assists designers with touch-based interactions. In the larger TUI world, there are many more classes of input to be considered.

A Sauron-powered prototype of a controller with a button, a direction pad, a scroll wheel, a dial, and a slider.

Continuing our explorations, we have begun a project to enable designers to turn models fabricated on commodity 3D printers into interactive prototypes with a minimum of required assembly or instrumentation. We share a goal with Disney’s work: functional tangible input devices which can be fabricated on a 3D printer without intensive post-print assembly. Our approach involves inserting a single camera into a completed input device print. Our current prototype, Sauron, consists of an infrared camera with an array of IR LEDs, which can be placed inside already-printed devices. The camera observes the backside of the end-user-facing input components (e.g. buttons, sliders, or direction pads), and via computer vision determines when a mechanism was actuated and how (e.g. it can give the position the slider was moved to along its track).

This allows designers to create functional objects as fast as they can print them: the only assembly required to make the parts work with Sauron is to insert the camera. Our next steps in this project include developing a CAD tool plugin that will aid designers in building prototypes where all input components are visible to the single camera; the plugin will automatically place and move mirrors and internal geometry to make all components visible within the cone of vision. This again frees the designer to think about the what rather than the how as prototypes come out of a 3D printer essentially already functional.
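
To illustrate the flavor of the vision step described above, here is a small OpenCV sketch that finds a bright, IR-lit marker inside a fixed region of the frame and maps it to a slider position; the region of interest, threshold, and mapping are invented calibration values, not Sauron’s actual pipeline.

```python
import cv2
import numpy as np

# Made-up region of interest covering the slider's track in the camera image:
# x, y, width, height in pixels.
SLIDER_ROI = (100, 50, 40, 200)


def slider_position(frame_gray):
    """Return the slider position in [0, 1], or None if the marker is not visible."""
    x, y, w, h = SLIDER_ROI
    roi = frame_gray[y:y + h, x:x + w]
    _, bright = cv2.threshold(roi, 200, 255, cv2.THRESH_BINARY)
    ys, _ = np.nonzero(bright)
    if len(ys) == 0:
        return None
    # Map the marker's mean row to a 0..1 travel value (0 at the bottom of the track).
    return 1.0 - float(np.mean(ys)) / h


cap = cv2.VideoCapture(0)  # stand-in for the IR camera placed inside the print
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pos = slider_position(gray)
    if pos is not None:
        print(f"slider at {pos:.2f}")
```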

The world is interactive, but most things created using digital fabrication aren’t yet. Through our work at Berkeley, and hopefully our interactions with the FAB at CHI workshop, we are hoping to explore the potential of digital fabrication tools for functional prototype design.