audio development

The killer feature of willshake’s audio player is “cues” that track where you are in the text, line by line, at each point in the recording. Those cues don’t just write themselves; I built special tools into the site for creating them.

1 note from original

1.1 DONE automatically renormalize

This is mostly a contention issue. The post-cue service writes directly to the cue source file, and since adjustments may come in rapidly, I use an append-only method. But that leaves the cue file in a denormalized state. This sounds like a good case for a queue: you can dump the new item on the queue quickly and then append and normalize in batches. I don’t need the normalization in real time; I just need it to be automatic.
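A minimal sketch of the batch step (names and cue shape are my assumptions, not the actual willshake format): fold the append-only log of adjustments into a normalized list, where the latest append for a given anchor wins and the result is sorted by time.

```javascript
// Hypothetical sketch: normalize an append-only log of cue adjustments.
// Later entries for the same anchor overwrite earlier ones, and the
// result is sorted by cue time.
function normalizeCueLog(entries) {
	const latest = {};
	for (const entry of entries) {
		latest[entry.anchor] = entry.at;	// later appends win
	}
	return Object.keys(latest)
		.map(anchor => ({ anchor, at: latest[anchor] }))
		.sort((a, b) => a.at - b.at);
}
```

In practice, even a simple debounce timer on the post-cue service could trigger this after a burst of appends, which would satisfy “automatic, but not real time.”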

2 code

For audio cues.

@import colors
.scene
	.line
		&:hover
			cursor pointer
			background $highlightColor

	.just-cued.line
		background lighten(desaturate(green, 50%), 50%)

define(['jquery'], ($) => {

	const LEFT_MOUSE_BUTTON = 0;
	const RIGHT_MOUSE_BUTTON = 2;
	const TOGGLE_THRESHOLD_MS = 250;

	function body_mousedown(e) {
		if (e.button == RIGHT_MOUSE_BUTTON) {
			this.ws_mousedowntime = new Date().getTime();
		}
	}

	function body_mouseup(e) {
		if (!ws || !ws.audio || !ws.audio.media) {
			return;
		}

		var downTime;

		if (e.button == RIGHT_MOUSE_BUTTON) {
			if (this.ws_mousedowntime) {
				downTime = new Date().getTime() - this.ws_mousedowntime;

				if (downTime < TOGGLE_THRESHOLD_MS) {
					// A quick click toggles play/pause.
					ws.audio.media[ws.audio.media.paused ? 'play' : 'pause']();
				} else {
					// A long press rewinds 8 seconds of media per second held.
					ws.audio.media.currentTime -= 8 * downTime / 1000;
					ws.audio.media.play();
				}

				this.ws_mousedowntime = null;
			}
		}
	}

	function body_contextmenu(e) {
		// Disabled while not doing audio cues.
		e.preventDefault();
	}

	function line_mousedown() {
		if (!ws || !ws.audio || !ws.audio.media) {
			return;
		}

		// Doesn't really need to be stored per element, but whatever.
		this.ws_mousedowntime = new Date().getTime();
		this.ws_mousedownmediatime = ws.audio.media.currentTime;
	}

	function line_mouseup(e) {
		// Keep the click from reaching the body handlers.
		e.stopPropagation();

		if (!ws || !ws.audio || !ws.audio.media) {
			return;
		}

		var newCueTime = ws.audio.media.currentTime,
			delta = 0,
			anchor = $(this).prev('.a').attr('id'),
			existingCue,
			index = ws.audio.index;

		console.assert(!!anchor);

		// Offset cue time by duration of click
		if (this.ws_mousedowntime && this.ws_mousedownmediatime) {
			delta = (new Date().getTime() - this.ws_mousedowntime) / 1000;
			newCueTime = this.ws_mousedownmediatime - delta;

			// If cue already exists, offset its time (instead of current)
			if (index) {
				existingCue = index.cues[anchor];
				if (existingCue) {

					// Offset backwards for left click
					if (e.button == LEFT_MOUSE_BUTTON) {
						delta *= -1;
					}

					newCueTime = existingCue.at + delta;

					// Also, update the loaded cue in-place
					existingCue.at = newCueTime;
				}
			}
			console.log("dev: audio: offsetting", anchor, "by", delta);
			delete this.ws_mousedowntime;
			delete this.ws_mousedownmediatime;
		}

		var $this = $(this);

		var postData = {
			time: newCueTime,
			play: $this.closest('[data-play]').attr('data-play'),
			section: $this.closest('[data-section]').attr('data-section'),
			anchor: anchor
		};
		console.log("dev: audio: postData = ", postData);
		$.post('/dev/post-cue?' + $.param(postData))
			.done(function() {
				$this.addClass('just-cued');
			});
	}

	function register_audio_cue_tools() {
		$('.open.play-section .scene')
			.offon('mousedown', '.line', line_mousedown)
			.offon('mouseup', '.line', line_mouseup);

		// These don't need to be re-bound every time, and they aren't unbound
		// when we leave a scene.  But we don't want this behavior outside of
		// the scene, so this is closer to the intent.
		$(document)
			.offon('mousedown', body_mousedown)
			.offon('mouseup', body_mouseup)
			.offon('contextmenu', body_contextmenu);
	}

	ws.on_visit_scene(register_audio_cue_tools);
});
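The registration code above relies on an `offon` jQuery plugin, which is not part of jQuery itself; presumably it is a site helper that unbinds any prior handler before binding, so that re-registering on every scene visit replaces rather than stacks handlers. A framework-free sketch of that idiom (the emitter here is purely illustrative):

```javascript
// Illustrative event registry showing the off-then-on idiom that the
// `offon` helper presumably implements: rebinding an event replaces the
// previous handler instead of stacking a duplicate.
function makeEmitter() {
	const handlers = {};
	return {
		on(type, fn) { (handlers[type] || (handlers[type] = [])).push(fn); },
		off(type) { delete handlers[type]; },
		offon(type, fn) { this.off(type); this.on(type, fn); },
		emit(type, ...args) { (handlers[type] || []).forEach(fn => fn(...args)); }
	};
}
```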

3 roadmap

The first full play took about six hours. The second took about five. The third took four and a half. I’d like to get that down further. Here are the problems I see.

3.1 adding section cues

I’m still doing this manually. I think the normalize step needs to add a separate section-only cue for the first cue it finds in a given section. What I don’t want to do is muck with the interface to handle these the way I handle lines; that would be more trouble than it’s worth.
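A sketch of what that normalize step might do (the cue shape is assumed): for each section, emit one section-level cue at the time of the earliest line cue seen in that section.

```javascript
// Hypothetical: given line cues tagged with a section, derive one
// section-only cue per section, timed at that section's earliest line cue.
function deriveSectionCues(cues) {
	const first = {};
	for (const cue of cues) {
		if (!(cue.section in first) || cue.at < first[cue.section]) {
			first[cue.section] = cue.at;
		}
	}
	return Object.keys(first).map(section => ({ section, at: first[section] }));
}
```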

3.2 adjustment mode?

All in all, the cue controls are pretty good. I’m not sure how much I could improve them. But I’ve started a somewhat different process, of ensuring (to the extent possible) that cues do not cut off the start of a line; in other words, that by cueing to a line directly, you will get the full start of the word that begins the line. (Whereas in the first round I was only concentrating on the timing of the movement, and for Ado, which is mostly prose, that’s really the only way to proceed.)

So I sometimes end up jumping back and forth between slight adjustments and playback. It would be helpful in that “mode” to automatically play back after an adjustment, although I still often want to make an adjustment without altering playback.
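One way to sketch such a mode (names and the pre-roll value are hypothetical): a flag that, after an adjustment is posted, seeks to just before the adjusted cue and plays, while a plain adjustment leaves playback alone.

```javascript
// Hypothetical "adjustment mode": when enabled, replay from just before
// the adjusted cue so the change can be checked immediately.
const PREROLL_S = 1.5;		// assumed lead-in before the cue
let adjustmentMode = false;	// toggled by some control (not shown)

function afterCueAdjusted(media, cueTime) {
	if (!adjustmentMode) return;	// plain adjustment: don't alter playback
	media.currentTime = Math.max(0, cueTime - PREROLL_S);
	media.play();
}
```

This could be called from the `.done()` handler of the post-cue request, so review playback only starts once the adjustment has been saved.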

3.3 fast forward

Didn’t I already do this?

I have rewind on the right mouse button, but no fast-forward. I wouldn’t use it frequently, but I would use it sometimes. If I could hold shift to reverse the direction, that would save me from listening through some sections.
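Factored as a pure function, the direction logic is a one-liner (the 8-per-second rate comes from the long-press handler above; the shift behavior is the proposal, not existing code):

```javascript
// Hypothetical: compute the seek offset for a long right-button press.
// 8 media seconds per second held; rewind by default, fast-forward
// when shift is held.
const SEEK_RATE = 8;

function seekOffset(downTimeMs, shiftKey) {
	const offset = SEEK_RATE * downTimeMs / 1000;
	return shiftKey ? offset : -offset;
}
```

The body_mouseup handler would then add `seekOffset(downTime, e.shiftKey)` to `currentTime` instead of always subtracting.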

3.4 enable new cues immediately?

This one is too much trouble, I think. The cues are indexed at load time, and there’s a good deal of overhead, not to mention refactoring, in redoing that.
