Tagging tools: the key to content searchability

Sometimes, searching the web for information can be a masochistic exercise.
You know it’s there somewhere, but it remains just beyond reach. Then,
irritation of irritations, you discover a misspelling or wrong choice of words
stopping you reaching your search Nirvana.

Nothing new here but, with the advent of self-publishing and lazy
republishers, otherwise good material ends up beyond reach because no one has
bothered to tag it properly. You’d have thought that by now on-tap tools would
be helping people get their stuff right before it is stored for posterity.

It seems not. JISC recently saw fit to fund some research into ways of making
publishing and retrieval more effective through enhanced tagging mechanisms.
Done properly, tagging can sidestep the problems associated with wrong word
choices and misspellings in the publication itself. But, without helpful tools,
the original author’s tags are in danger of perpetuating the problem.

And, of course, there’s the perennial issue of “why should I tag anyway?”
Most people do it for their own selfish reasons rather than the greater good, so
sheer volume is needed before tagging becomes effective: they can’t all
misspell the same words or, indeed, use the same vocabulary. The exceptions are
the indexers who get paid for doing a professional job. But the amount of
material they can index is a rapidly shrinking proportion of the information
pouring onto the web.

The JISC-funded research project – Enhanced Tagging for Discovery or EnTag
for short – was led by UKOLN. It set out to discover what sort of assistance
worked best for readers and authors. It offered a blend of taxonomic and
folksonomic suggestions, with the ability to see how others chose to tag the
same material. It also looked at the impact of each on people’s ability to
retrieve information.
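The blend EnTag explored can be pictured in a few lines. What follows is a minimal sketch, not EnTag's actual system: the vocabulary, the tag counts and the scoring weights are all invented for illustration.

```python
from collections import Counter

# Hypothetical controlled vocabulary (taxonomic) and other users' tags
# (folksonomic) for one resource -- illustrative data only.
TAXONOMY = {"information retrieval", "metadata", "indexing"}
OTHER_USERS_TAGS = Counter({
    "metadata": 14, "tagging": 9, "indexing": 6, "folksonomy": 4,
})

def suggest_tags(limit=5):
    """Blend taxonomy terms with popular community tags.

    Community tags are ranked by how often others used them; a tag
    gets a boost if it also appears in the controlled vocabulary,
    and unused vocabulary terms are still offered at low rank.
    """
    scored = {}
    for tag, count in OTHER_USERS_TAGS.items():
        scored[tag] = count + (10 if tag in TAXONOMY else 0)
    for term in TAXONOMY:
        scored.setdefault(term, 1)  # always offer vocabulary terms
    return sorted(scored, key=scored.get, reverse=True)[:limit]

print(suggest_tags())
```

The point of the sketch is the shape of the interface: the author sees one ranked list, with the formal vocabulary and the crowd's choices reinforcing each other rather than competing.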

It struck me that a year-long academic project would be swiftly overtaken by
people in the real world of agile development. But either the real world is
hiding its light under a bushel or no one has anything sensible to offer yet,
not to the public at large anyway. (Ping me if you know otherwise.)

The project did at least give rise to two more funded projects. One, called
EASTER (Evaluating Automated Subject Tools for Enhancing Retrieval), aims to
find out what is out there and whether it’s any good. The other – PERTAINS
(Personalisation Tagging Interface Information in Services) – looks as if it
might even end up with a product/service prototype.

The trick will be to get the user interface right. The university researchers
on EnTag borrowed a couple of demonstration systems to enable the guinea pigs to
test their assertions about formal versus informal tagging and simple versus
advanced tools. When the guinea pigs were asked about their experiences, their
answers were bound to be coloured by the software’s ease of use.

I suspect the enhanced tagging system would have been more popular had it
been easier to use. But it doesn’t really matter. The report raises, and sort of
quantifies, all the key issues, even if it doesn’t point to any answers.

As a non-academic, working in the real world, I’d just like a system/service
on tap that knows me and my interests, can call on relevant contextual
vocabularies, can see how other people have tagged a particular resource and
will correct my spelling. And I’d like it all wrapped up in a clutter-free and
intuitive user interface. Oh, and it shouldn’t cost a fortune. Is this such an
unreasonable request?
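The “correct my spelling” part of that wish-list, at least, needs nothing exotic. Here is a minimal sketch using Python’s standard-library fuzzy matcher; the known-tag list is a made-up stand-in for what a real service would draw from my history, contextual vocabularies and other people’s tags.

```python
import difflib

# Hypothetical known-tag list -- in a real service this would come from
# the user's interests, contextual vocabularies and others' tags.
KNOWN_TAGS = ["taxonomy", "folksonomy", "metadata", "retrieval", "tagging"]

def correct_tag(tag, cutoff=0.8):
    """Return the closest known tag, or the input unchanged if nothing
    is similar enough -- a stand-in for 'correct my spelling'."""
    matches = difflib.get_close_matches(tag.lower(), KNOWN_TAGS,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else tag

print(correct_tag("folksonmy"))  # -> "folksonomy"
```

The cutoff matters: set it too low and the tool silently “corrects” legitimate new vocabulary, which is exactly the perpetuated-error problem in reverse.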

Perhaps the natural place to look for such a solution is somewhere that’s
already storing all the web’s visible resources. Are you listening, Google?
