Vegan Pancakes!

Friday, January 16, 2009

In response to Alecia's post (*ahem*: my new best friend's post) on our conversation about her project, I'll just jot down some of my mental notes.

First off, the problem: to make the maps of pollutant release data from NPRI (or any other maps, really) accessible. More specifically, to make the NPRI data as useful for non-sighted users as for sighted users -- and this means making some of the implicit information a sighted person gets from a map accessible to a non-sighted user. As I understand it, since the maps are displayed as images, the (only?) accessibility mechanism Alecia can use is the longdesc attribute of the image. So the problem is partly one of coming up with a meaningful textual description of a map. A few other constraints: the web interface ought to be unified -- the same for both sighted and non-sighted users -- and to be meaningful, the text description can't be overly long (i.e. it ought to contain only relevant information, whatever that means; see below).
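
Just to pin down what I mean by the mechanism, here's a minimal sketch (Python, purely for illustration). The facility records, field names, and URLs are all made up -- this isn't NPRI's real schema -- but the idea is that the map stays an ordinary image whose longdesc attribute points at a separately served text description.

    # Rough sketch with hypothetical field names and URLs (not NPRI's real schema):
    # build a short text description for the current map view and point the
    # map image at it via the img element's longdesc attribute.

    # Made-up facility records, for illustration only.
    facilities = [
        {"name": "Acme Smelter", "substance": "sulphur dioxide", "tonnes": 120.0},
        {"name": "Basin Refinery", "substance": "benzene", "tonnes": 4.2},
    ]

    def describe_view(facilities, region="the current map view"):
        """Turn the facilities visible in the map view into a short description."""
        lines = [f"Pollutant releases in {region}: {len(facilities)} facilities shown."]
        for f in facilities:
            lines.append(f"{f['name']} reported {f['tonnes']} tonnes of {f['substance']}.")
        return " ".join(lines)

    def map_image_html(img_url, desc_url):
        """The map is still just an image; longdesc points at the full text description."""
        return (f'<img src="{img_url}" alt="Map of NPRI pollutant releases" '
                f'longdesc="{desc_url}">')

    # The description itself would be served at desc_url (e.g. a plain HTML page).
    print(describe_view(facilities))
    print(map_image_html("/maps/view42.png", "/maps/view42-description.html"))
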
  • "...implicit information...": we talked a bit about what this means, and it seems like it means just about anything you care to say is in the map. So, the clustering of smoke stacks around a major waterway is implicit because just by looking at the map you can pick out the cluster -- even though the cluster isn't represented as a cluster in the NPRI data. Um, even things like 'major waterway' is implicit, in a way, because it's not part of the NPRI data but it does appear on one of the other layers of the map. The amount of implicit information is practically, well, infinity+1, so trying to include all of it in a longdesc is not an option. So what to include then? Hrm.. accessible cartography's own frame problem.

    The scenario I think we're both assuming here is a Google Maps-type map which shows all sorts of terrain details, street details, etc., and then overlaid on that is the NPRI data. A sighted person can ask and answer all sorts of queries on her own, just by looking at the map and visually sifting through the implicit data.

    One way we discussed to constrain the amount of information to put in the longdesc is to have a UI where users can express explicitly what they want to find. Once they state that, the system gives 'em an appropriate map. In this way, whether they're sighted or not, we know enough about what to leave out of the description and what to keep, so that only relevant features are included.
  • Subproblem 1: So, if we run with the "task-oriented" UI that elicits the user's goal, then we have to design it well. There are probably another infinity+1 possible user questions, but maybe they fall roughly into a few classes that the UI could be specially designed for (e.g. questions about locality: "what waterways are closest (5, 10, 25, ... km) to a particular smoke stack?" -- there's a rough sketch of one such locality query after the list). How to find those classes? A user study, or mining the NPRI advanced query tool, maybe?
  • Subproblem 2: "...a meaningful textual description of a map...": Okay, once you have the relevant aspects of the map in hand, you still have the problem of turning them into a sensible text description (the task-oriented sketch after the list ends with a crude, template-based stab at this). I'd think there's been work done on this already....
  • "...the web interface ought to be unified...": I really appreciate this point. I didn't at first, since it seemed intuitive to just split things up. One question I have for Alecia is whether she thinks it's okay to sneak in some features that are particularly useful for non-sighted users? For instance, maybe the default map displays all sorts of layers that aren't included in the longdesc. It's only when the user selects these layers to be, say, highlighted in someway, that they're actually included in the textual description. Is that fair game?
  • Exploration: one thing I thought about after our meeting was making the accessible version of the map open to exploration. As we were talking I immediately zoomed in on the task-oriented UI as a way to narrow down the map detail, but maybe a user isn't coming to the site with a specific task in mind; maybe they just want to explore the data in a less structured way... I don't know what this means exactly, but I guess I'm thinking of how sometimes patterns just pop out of a picture (e.g. clusters) when you're not looking for them. Thinking about how to include exploration in accessible maps might be out of scope for Alecia's project, but it sure sounds interesting.
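
To make the cluster-spotting idea from the first bullet a little more concrete, here's a rough sketch. The stack coordinates, the 2 km radius, and the naive single-link grouping are all invented -- the point is just that a cluster that appears nowhere in the NPRI records can still be computed and handed to the text description.

    import math

    # Made-up stack locations (lat, lon); not real NPRI data.
    stacks = {
        "Stack A": (43.651, -79.383),
        "Stack B": (43.655, -79.380),
        "Stack C": (43.649, -79.387),
        "Stack D": (44.230, -76.481),
    }

    def km_between(p, q):
        """Rough great-circle distance in km (haversine)."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(a))

    def clusters(points, radius_km=2.0):
        """Naive single-link grouping: a stack joins a group if it's within
        radius_km of any member; otherwise it starts a new group."""
        groups = []
        for name, loc in points.items():
            for g in groups:
                if any(km_between(loc, points[m]) <= radius_km for m in g):
                    g.append(name)
                    break
            else:
                groups.append([name])
        return groups

    for g in clusters(stacks):
        if len(g) > 1:
            # Implicit information made explicit: a sentence a longdesc could carry.
            print(f"A cluster of {len(g)} smoke stacks: {', '.join(g)}.")
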
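And here's a stab at the task-oriented side, again with everything invented (the waterway layer, the coordinates, the radius choices): the user names a stack and a distance, the system keeps only the waterways that answer that question, and a crude template turns the answer into a sentence. The active_layers parameter is my take on the "fair game" question above -- layers the user hasn't selected simply never make it into the description.

    import math

    def km_between(p, q):
        """Great-circle distance in km (haversine)."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(a))

    # Made-up layers: one NPRI stack plus a base-map waterway layer.
    stacks = {"Acme Smelter": (43.651, -79.383)}
    waterways = {"Don River": (43.660, -79.355), "Humber River": (43.632, -79.494)}

    def nearby_waterways(stack, radius_km, active_layers=("waterways",)):
        """One class of locality question: waterways within radius_km of a stack.
        Only layers the user has selected contribute to the description."""
        if "waterways" not in active_layers:
            return []
        origin = stacks[stack]
        return sorted((round(km_between(origin, loc), 1), name)
                      for name, loc in waterways.items()
                      if km_between(origin, loc) <= radius_km)

    def describe(stack, radius_km):
        """Subproblem 2, crudely: turn the query result into one sentence."""
        hits = nearby_waterways(stack, radius_km)
        if not hits:
            return f"No waterways within {radius_km} km of {stack}."
        parts = ", ".join(f"{name} ({d} km)" for d, name in hits)
        return f"Waterways within {radius_km} km of {stack}: {parts}."

    print(describe("Acme Smelter", 5))
    print(describe("Acme Smelter", 25))

Obviously real text generation for Subproblem 2 would need more than string templates, but even this much gives a non-sighted user the same answer a sighted user gets by eyeballing the map.
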

3 comments:

Leash said...

Thanks for the thoughts and feedback, Jon. I'm wondering if I should put the implicit information on the back burner for now, and just deal with conveying the explicit data in a text format that makes sense -- some sort of starting point to build on.

I'm also not sold on a task-oriented design... wouldn't that just be another text-based search? I'm intrigued by your last paragraph, open exploration. That's basically how sighted users peruse a map, isn't it?

Jay said...

Lies! This post had no pancakes! A hundred lolcats are weeping now.

Jon said...

"I'm also not sold on a task-oriented design... I'm intrigued by your last paragraph, open exploration. That's basically how sighted users peruse a map, isn't it?"

Task-oriented design was just my straightforward way of getting users to give you hints about what they want to see -- so you know which implicit bits of information to make explicit. There's no reason that this sort of design precludes exploration, though. I imagine there would be many, many knobs to turn on a task-oriented map, and I could explore the data by a willy-nilly turning of those knobs.
