Working notes for my mnemosyne project. Please excuse the dust and noise.

July 11, 2012

Basics are set up. Using node will force me to learn to write better javascript, plus make it easy to run the same code both as a command-line program and as a web app – I’ll be able to test out new functions with dummy files.

Already, the fact that node is asynchronous is causing problems. Tried to write files after reading them and realized the two operations were running in parallel, with the write finishing first. It threw a lot of errors until I figured out what was happening.

Learned about callback functions and got bad Scheme flashbacks. The time function has some fancy padding to add leading zeros to ‘0X’ and ‘00’ values.
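The padding isn’t that fancy in the end – something like this left-pads to two digits so times come out as “09:05” instead of “9:5” (function names are mine, not necessarily what’s in the project):

```javascript
// Left-pad a number to two digits with a leading zero.
function pad(n) {
  return n < 10 ? '0' + n : '' + n;
}

// Format hours and minutes from a Date as "HH:MM".
function timestamp(date) {
  return pad(date.getHours()) + ':' + pad(date.getMinutes());
}

console.log(timestamp(new Date(2012, 6, 11, 9, 5))); // "09:05"
```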

Set up content templates – text, image, audio, video, link, quote – which are based on Tumblr. I’m going to also add one for tweets.
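A guess at the shape of the template list as data – the field names are hypothetical, only the type names come from the list above:

```javascript
// One entry per content type; fields are illustrative placeholders.
var templates = {
  text:  { fields: ['title', 'body'] },
  image: { fields: ['src', 'caption'] },
  audio: { fields: ['src', 'caption'] },
  video: { fields: ['src', 'caption'] },
  link:  { fields: ['url', 'title', 'description'] },
  quote: { fields: ['body', 'attribution'] },
  tweet: { fields: ['url', 'author', 'body'] }  // the planned addition
};
```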

Looking at Storify for more ideas: Facebook, Flickr, Instagram. No use for Facebook, but image extraction from Flickr or Instagram URLs may be a good idea.

Should also remember to do special handling for YouTube, Vimeo, and Pinboard.


July 20, 2012

Looked into PostgreSQL for future data storage; it looks to have good node support. I’m keeping things as flat files for now, but eventually content data needs to live in a database (with easy exporting of user data to JSON). I’m working on the HTML versions of the content, including the visual design for dragging and editing. I want to have a good idea of the markup structure for each content type before the translation code gets written.

Using jQuery UI for drag and resize functions – with some grid snapping, which saves me a lot of work. Current thinking is 5 columns of 200px each, with vertical units of 20px.
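The snapping itself is just rounding to the nearest cell – here’s the math with the grid constants from above, plus the jQuery UI option that does the same thing (selector is a placeholder):

```javascript
// Grid constants: 5 columns of 200px, vertical units of 20px.
var COL_W = 200, ROW_H = 20;

// Round a free-drag position to the nearest grid cell, which is
// what jQuery UI's `grid` option does during a drag.
function snap(x, y) {
  return {
    x: Math.round(x / COL_W) * COL_W,
    y: Math.round(y / ROW_H) * ROW_H
  };
}

// In the page itself this is just a config option, e.g.:
// $('.item').draggable({ grid: [COL_W, ROW_H] })
//           .resizable({ grid: [COL_W, ROW_H] });
```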

When the content objects get hooked up to the backend:


August 29, 2012

I couldn’t get my mind off Dan Hill’s post about dark matter, which made a great testing ground for the first Mnemosyne trial. Many of the pieces aren’t done yet, so I had to “simulate” the steps I would be taking in the fully functional version. It felt a bit like running in water.

So: it works. The workflow above closely mirrors the most common user scenario: taking apart several pieces and putting them back together in a synthesized form.

Obviously it can also be used to collect items from many different sources (like multiple responses on Twitter). Storify handles this reasonably well, though the linearity of it restricts the kind of arrangement that’s possible here.

It helps to draw connections spatially instead of semantically, so they stay kind of fuzzy until you figure out more precise ideas. Two things can sit next to each other whether they’re saying the same thing, on different sides of an argument, or building off of one another.

I was surprised by how much playfulness and momentum come through. This comes from learning to place items in a way that leads the eye, so that the reader is able to follow along one thread into the next. A certain rhythm emerges from the sizes and positions changing from one thing to the next. (This was pretty intuitive to figure out, and the system is loose enough to let you figure out your own conventions.)

What I haven’t confirmed:

I see it primarily as a private tool, to be used to work out ideas before they become an essay (or whatever else). But I liked that at the end, @rogre suggested a bunch of things for me to add. @vruba had used Google Docs with a group of people (8+) one night to draft out ideas in public, and while that was riotous fun, I want to build in more controlled points where someone can give you feedback or suggestions, at a time and place where you’re best able to draw on them.