
No. 3697: Subbly Nugmare Garfield

2019-07-03

Permanent URL: https://mezzacotta.net/garfield/?comic=3697

Strip by: wye

Jon: Garfield, you ever.
Garfield: I can't own happy thoughts in the way to "Garfield's Color Snout".]
{Garfield becomes smoke}
[SFX]: ZZZZZ-ZZZZ
Garfield: {reappearing} The Hause is on a custay...
Garfield: Fold about too.

The author writes:

tl;dr: This was built from a transcript generated by a neural network trained on SRoMG strips.

The inspiration for this one was the random transcript generator featured in strips #3138 and #3139. I wanted to try making my own transcripts, and thought this would be a nice occasion to get my feet wet in the world of neural networks.

For the uninitiated: a neural network takes an input and performs all kinds of interconnected computations on it, and with training data you can make it learn the right kinds of connections to produce a useful output. In this case it's a recurrent neural network (I used torch-rnn), the output is text, and the training data is the transcripts of the first 3565 SRoMG strips. I also trained it (separately) on strip titles and on authors' notes; the former is where the title of this strip comes from.
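For a concrete picture, here is a minimal sketch of the same idea: a character-level RNN trained to predict the next character of the transcripts. It's written in PyTorch rather than the Lua-based torch-rnn actually used for this strip, and the file name "transcripts.txt", the seed character, and all the hyperparameters are illustrative placeholders, not the real training setup.

import torch
import torch.nn as nn

text = open("transcripts.txt").read()           # hypothetical: all transcripts, concatenated
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}      # character -> integer index
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state              # logits over the next character

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=2e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch = 100, 32

for step in range(5000):
    # random windows of text; the target is the same window shifted by one character
    ix = torch.randint(len(data) - seq_len - 1, (batch,)).tolist()
    x = torch.stack([data[i:i + seq_len] for i in ix])
    y = torch.stack([data[i + 1:i + seq_len + 1] for i in ix])
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Generate text one character at a time, feeding each sample back in
idx, state, out = torch.tensor([[stoi["J"]]]), None, "J"   # seed character, as in "Jon:"
for _ in range(300):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    idx = torch.multinomial(probs, 1).view(1, 1)
    out += chars[idx.item()]
print(out)

Because the LSTM carries its state across the whole sequence, it can (in principle) remember something like "a brace was opened" long enough to close it later.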

Compared to simple processes like Markov chains, neural networks do a much better job of replicating the structure of the original text. For instance, these networks "learned" that a transcript consists of multiple short lines, often starting with an actor name followed by a colon, that opening braces should later be closed, and that all authors' notes end with a list of original strips and dates in YYYY-MM-DD format - all of which a Markov model, which only ever conditions on the last few characters, could never hope to reproduce. (It even knows when to use "strips" instead of "strip"!)*
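For contrast, here is a toy order-2 character Markov chain (again just an illustration, with the same hypothetical "transcripts.txt"): it picks each next character based only on the previous two, so a brace opened twenty characters earlier has already been forgotten.

import random
from collections import defaultdict

def train_markov(text, order=2):
    table = defaultdict(list)                   # 2-char context -> observed next chars
    for i in range(len(text) - order):
        table[text[i:i + order]].append(text[i + order])
    return table

def generate(table, seed, length=300, order=2):
    out = seed
    for _ in range(length):
        candidates = table.get(out[-order:])
        if not candidates:                      # dead end: context never seen in training
            break
        out += random.choice(candidates)        # any memory beyond 2 characters is gone
    return out

table = train_markov(open("transcripts.txt").read())
print(generate(table, "Jon: "))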

However, the output is... still not great. That's probably because the dataset is rather small, and/or because I don't know how to choose the optimal network size. I had kind of hoped it would give me dozens of human-quality transcripts, notes and titles, but in the end most of them had too many gibberish words, uninterpretable actions and implausible formatting, and I was lucky to find one or two usable ones of each. The network seemed to have particular trouble writing titles. The strip I ended up making isn't actually all that wacky, aside from maybe the middle panel.

As mentioned, I also trained the network on authors' notes, so here's what it had to say about its creation (a random sample, not actually connected to this strip):

I decided to fly watch expanding on another comic, we made this comic on Garfield). I could not have to take these dialogue in my fability, is becoming a bus fringen proof in the context of the editors. I could've used a process he's never month of its called Lyman of character". Entered the resulting factors, utvertainments below't variety of the most commashugahal order.

it's "Style of the Emported Repayzence comic based on a narrotle of Courier version, ago by the numbers is largely? Eids that a similar strip were above. Unternalikey.

[[Original strip: 2015-11-06.

I can't decide if this is more or less enlightening than some of the shorter comments it produced (such as "I made this strip made from death" and "Initially I made this").

*[I'm disappointed it didn't learn that stage directions like "{reappearing}" should go before the colon separating actor name from speech. Probably because the training data isn't consistent on that matter... -Ed]]]

Original strips: 1996-01-30, 1996-03-15, 1996-06-19, 1997-03-07, 1997-04-28, 1999-07-03, 1999-08-30, 2001-05-07, 2003-04-18, 2005-01-12, 2005-07-21, 2006-10-28, 2006-12-04, 2008-08-02, 2014-02-05, 2014-07-16.