In the previous post, I suggested that “digitally-enhanced” comics weren’t so much an “enhancement” of existing comics as the start of something different-but-related. Personally, I find it much more exciting to think of them like this than as some “comics evolved for the 21st century” guff. Comics are doing fine (and adapting to the 21st century fine, thanks to Comixology, Sequential and others). Electricomics and their predecessors are something different, able to borrow storytelling techniques from comics, movies and games. And, crucially, to invent their own idioms.
In this post, I want to do a quick tour of the anatomy of this new creature. Oh, and speculate about the near future (“near” as in next 12 months) in a way that’ll make me cringe with embarrassment when I go back to read it 🙂
Rather than trying to examine tilt, swipe & friends one by one, I’m going to propose a broader classification. Digital comics can be enhanced in one of three ways, which I’ll present in increasing order of power. (By “power”, I mean how much a technique can alter the storytelling experience, not how much it’ll drain your batteries! I’m easily excited by techie stuff, and proposing grand classifications is an inherently techie thing to do, but I want to keep an eye on storytelling as the ultimate goal here.)
Here we go…
Type 1: Special Effects
Using digital effects, we can introduce movement and sound in various ways to dress up an individual panel or sequence of panels. The principal use of this is to establish mood or a sense of place. There are “obvious” uses – changing a character’s facial expression or body language to convey a change in emotion, animating a bodaciously cool fight move, watching a building/planet/giant cosmic sandwich collapse in slo-mo, and so on. We can do somewhat more subtle stuff too – parallax scrolling on a long shot can give a sense of depth, changing colours can show the sun setting. Music or sound effects can add to the narrative – something well established in movies.
(Here’s a well-known example – warning, horror & sound effects!!)
Impact on Storytelling: pretty much mood/sense of place, unless my imagination’s failing me. (Looking forward to being proven wrong here)
Type 2: Presentation & Layout
Physical comics are restricted to the size of the page. You can pick a size for a book (6×9″, US letter, A4, etc., portrait or landscape), and then you’re stuck with it. Syndicated strips are stuck with a fixed amount of real estate in a newspaper. It’s possible to play with this a bit – double-page spreads, switching between portrait & landscape (e.g. for several issues of Dave Sim’s Cerebus “Church & State” run), mega-sized pull-out pages (e.g. a recent issue of Fraction & Ward’s Ody-C, or Mick Inkpen’s “Blue Balloon” children’s book). An episode of Alan Moore & J.H. Williams’ Promethea, and a recent issue of Silver Surfer by Dan Slott & Mike Allred, both used Möbius strip structures, for example.
Early digital comics (i.e. everything that we’ve seen so far) tend to follow the fixed-page conventions of their paper ancestors, in the same way early movies stuck with a theatrical proscenium arch. In both cases, adhering to the restriction is a security blanket that will be discarded as the new medium matures.
Scott McCloud coined the term “Infinite Canvas” to describe this new freedom. Drew Weing’s “Pup Ponders the Heat Death of the Universe” is the go-to example of this technique, and is funny and deep with it! On an infinite canvas, the user can move around freely – up, down, sideways – to follow the story. One thing I haven’t seen explored much yet is the ability to zoom in and out. A PowerPoint-like tool called “Prezi” did the rounds of a place I worked a few years ago, and made great use of infinite zooming to jazz up business presentations – imagine a comic strip that can be read like these sample presentations. Computer UI guru Jef Raskin, one of the people responsible for the modern GUI via his early work at Apple, proposed a similar zoomable interface for file management on desktop computers. Google Maps and its ilk use a zoomable UI too (and support a rich enough API to tell a story!).
At the other extreme, it’s possible to increase the control over the user’s travel. The zooming panels in Scott McCloud’s The Right Number, the regular beats of Bryan Talbot’s Metronome expanded edition, and Comixology’s Guided View all walk the reader through the story, one panel at a time.
Impact on Storytelling: Allows the writer to grant more, or less, freedom to the reader. What effect does it have on the reader? Guiding the reader may help to establish a smooth flow. In the case of Metronome, it’s done to enforce the rhythm of the story (and the story’s very much focused on rhythm). There’s a danger of disempowering the reader. On the other hand, giving too much freedom risks losing the thread, and pulling them out of the narrative (which may be intentional, if you’re looking to instil a sense of disorientation or unease, but is generally bad news if you’re loking to tell a conventional, immersive story).
Type 3: Non-Linear Narrative
Given an infinite canvas, there’s an obvious temptation to “fork” the story and head off in more than one direction – yep, it’s Scott McCloud again. In algorithmic terms, our story is no longer a linear sequence but a “graph”: a set of “nodes” (panels) connected by “edges” (paths from one panel to another). Graphs can be represented visually as a series of panels on a flat plane, although not all of them can be drawn without edges crossing over. McCloud’s early experiments often make the edges explicit (e.g. this), as guide-lines between the panels, rather than using the standard convention of putting panels right next to one another.
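To make the nodes-and-edges idea concrete, here’s a minimal sketch of a story graph in TypeScript. All of the names (`Panel`, `StoryGraph`, `nextPanels`) are my own invention for illustration, not part of any Electricomics format:

```typescript
// Panels are nodes; reader paths between them are edges.
interface Panel {
  id: string;
  caption: string;
}

interface StoryGraph {
  panels: Map<string, Panel>;
  edges: Map<string, string[]>; // panel id -> ids of the panels reachable next
}

// Look up the panels the reader can move to from the current one.
function nextPanels(story: StoryGraph, current: string): Panel[] {
  const targets = story.edges.get(current) ?? [];
  return targets.map((id) => story.panels.get(id)!);
}

// A tiny two-way fork: panel "p1" leads to either "p2a" or "p2b".
const story: StoryGraph = {
  panels: new Map([
    ["p1", { id: "p1", caption: "Our hero reaches a crossroads." }],
    ["p2a", { id: "p2a", caption: "She takes the left path." }],
    ["p2b", { id: "p2b", caption: "She takes the right path." }],
  ]),
  edges: new Map([["p1", ["p2a", "p2b"]]]),
};
```

A linear comic is just the special case where every panel has exactly one outgoing edge; the fork above is where it stops being linear.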
A graph, though, is just a data structure, and can be used to drive other modes of presentation too. There are precedents here in HyperCard, an early software tool for creating non-linear presentations (used heavily in the development of the early puzzle game Myst, apparently), and in the paperback adventure books of the 1980s. Two of the demo Electricomics (Sway and Cabaret Amygdala) use non-linear storytelling, activated by different mechanisms. Sway “forks” the story by tilting the character into the past or future, whereas Cabaret Amygdala uses hyperlinked objects (phones, notebooks, TV screens) as gateways to different pages. Both of these organise the story visually in conventional page-sized chunks – perhaps a wise move to help ground the fledgling electricomic reader in a bit of familiarity?
Remember, readers, as well as creators, are very much figuring out this new medium right now. Look at how film storytelling has continued to adapt to an increasingly sophisticated audience, with once-innovative techniques like Nic Roeg’s cut-up rapid edits being absorbed into the mainstream storytelling repertoire.
Looking outside comics, we won’t find many non-linear narratives in movies, but they’re a staple in games, of course. Some enhanced comics look (play? whatever!) very like games (e.g. Daniel Merlin Goodbrey’s “Duck has an Adventure”). Once I’d figured out how to read Sway and Cabaret Amygdala, I frequently felt a bit disoriented, and concerned that I wasn’t going to find all the ways through the story. I don’t play computer games – maybe if I did, I’d feel more at home here?
While these kinds of electricomics are moving closer to games, there is a growing sub-group of games moving closer to comics – typically independent, or small-budget, with a greater emphasis on narrative, drama and suspense, and less on action (here are a couple of random examples that I’ve heard good things about). These games still tend to rely on more fluid animation than you’d find in comics, but are encountering similar issues in straddling the divide between puzzle and immersive narrative. As storytellers, we want our audience to be immersed within the story. As puzzle-makers, we need to awaken the critical thinking & logic centres that immersive storytelling needs to suppress. I certainly felt the tension while reading Cabaret Amygdala and Sway – with my attention on figuring out the underlying graph of the story, I don’t think I really empathised with the key characters at any point. But, as I said, maybe I’m just showing my age there…
Impact on Storytelling: In short – Immense 🙂
As with any good three-point summary, there’s a lot that doesn’t fit into the three categories. Here are a few more thoughts I’d like to get out of my head and onto the internet…
There’s a mature discipline of User Experience testing (UX for short) in software development these days, the findings of which may be useful to Electricomics creators. One key term from UX is “Discoverability” – in the absence of instructions (either because there aren’t any, or because the user didn’t bother to read them), how easy is it to figure out how to use something? Rules for good discoverability differ greatly depending on how the user uses the tool. Complex, powerful tools of the trade are expected to be difficult, and to require training and time to learn. Apps designed for intermittent or occasional use, such as dictionaries, calculators and notepads, ought to be intuitive, requiring no training to master. Typically, an Electricomic falls into the second category, and it should be obvious how to read it. Again, that’s going to depend on the creators’ intent, and whether they want to immerse the reader in a story or challenge them with a puzzle.
Many iPad/Android applications will adjust to either portrait or landscape mode as the tablet is rotated. Android apps in particular need to work on a variety of screen sizes and shapes (well, different kinds of rectangles – star-shaped touchscreens haven’t caught on for some reason). The Electricomics demo has wisely side-stepped much of this for its initial release, by limiting distribution to the iPad, and also by restricting each comic to either landscape or portrait viewing. Within these constraints, it’s possible to position panels on the screen with pixel-perfect accuracy, giving the designer easy control over the look, proportions and layout of the story.
Early graphical user interface design tended to position buttons, text boxes etc. pixel-perfectly, whereas most modern user interface systems position elements relative to one another, to accommodate a variety of form factors. With a typical user interface, utility is king. Electricomics are a utilitarian user interface and a Work Of Great Art all at once, so the hurdles to adapting to different form factors are significant – but not unsolvable – and I’d expect some capability to resize dynamically to become part of the toolkit at some point in the future.
With the rise of mobile devices, touch screens, etc., the practice of “Responsive Design” has been a hot topic in recent years. This goes further than simply shrinking or growing elements to fit the screen, actually switching between layouts to match the form factor. Hence an email app might show list and details side by side on a big screen (e.g. a tablet), and one or the other as tabs on a smaller screen (e.g. a phone).
Could an electricomic take advantage of responsive design? From a purely utilitarian viewpoint, a panel-by-panel guided view such as the digital version of Talbot’s Metronome might be a good fit for smaller form factors, whereas a grid layout would work better on a bigger screen. But could responsive design also be used artistically? Imagine a story that could be read portrait or landscape, in which rotating the screen revealed a different viewpoint, and details/clues that altered the reader’s perception of the characters’ motives.
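The utilitarian half of that idea is easy to sketch. A reader app might pick its layout from the viewport’s shape – the function name, mode names and numbers below are my own illustrative assumptions, not any real reader’s API:

```typescript
// Hypothetical layout chooser: tall, narrow viewports (phones held upright)
// get a panel-by-panel guided view; wider viewports get a full page grid.
type LayoutMode = "guided" | "grid";

function chooseLayout(width: number, height: number): LayoutMode {
  return width < height ? "guided" : "grid";
}

// Example viewport sizes (in CSS pixels, roughly phone vs. tablet):
const phonePortrait = chooseLayout(375, 667); // "guided"
const tabletLandscape = chooseLayout(1024, 768); // "grid"
```

The artistic use would hang off the same check: the portrait and landscape branches could map to different panel sets rather than just different arrangements of the same ones.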
On The Fly
There’s one final bit of crystal-ball gazing that I want to indulge in, based on the history of the web. Web site design, from one perspective, had three main stages:
- in the early days of the web, pages were composed of static words and images, which were authored, then uploaded to a web server
- next, the content of a web site was held in a database, and assembled into a page of content on request; most often, images were still retrieved wholesale from files
- finally, both content and images could be created on the fly by a computer program, in response to each request or user action
This third model seems to me to be the most interesting path for Electricomics to take. A non-linear narrative could contain a program that generates the story graph in response to a user’s actions, rather than being limited to predetermined forks and “jumping off” points. (If this sounds very much like a weird way of describing a computer game, then it’s because it is!)
Using modern web technologies, images can be composited and rendered on the fly. What are the storytelling possibilities here, while sticking within the panel-based constraints of a comic? We could:
- change the tone, colouring of a panel based on some story-based criteria
- make panels longer, taller or shorter, if the background can be generated dynamically
- show certain characters, props or other visual elements only at certain times of day. How about a ghost story that injects extra scary elements into the plot only after it’s got dark outside?! Or only on Christmas Day/Halloween/etc.
- multi-player stories that change depending on how many other people are reading them, and what they’ve done
- a chase scene with different outcomes depending how quickly you swipe from panel to panel
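The ghost-story idea above is simple enough to sketch. Here’s a toy TypeScript version of condition-driven panels, where the scary ones only join the sequence after dark – the names and the crude after-dark rule are my own illustrative assumptions:

```typescript
// Panels flagged nightOnly are injected into the sequence only after dark.
interface TimedPanel {
  caption: string;
  nightOnly?: boolean;
}

function visiblePanels(panels: TimedPanel[], hour: number): TimedPanel[] {
  const isDark = hour >= 21 || hour < 6; // crude "after dark" rule (9pm-6am)
  return panels.filter((p) => !p.nightOnly || isDark);
}

// Read at 2pm, this is a quiet two-panel scene; read at 11pm, the pale
// face appears between them.
const sequence: TimedPanel[] = [
  { caption: "The house stands quiet." },
  { caption: "A pale face at the window.", nightOnly: true },
  { caption: "The door swings open." },
];
```

The same filter shape works for the other ideas in the list – swap the clock check for a mood flag, a swipe-speed measurement, or a count of concurrent readers.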
Computer-generated panel layout will be a challenge to artists: some new freedoms appear, but other restrictions will apply too. It is feasible, though, as shown by early experiments such as Greg Borenstein’s “Generated Detective”, which uses face detection on random images to work out where to place speech balloons.
It’s going to be a wild and interesting ride. And hopefully not all like the one I’ve described above!