Thus far, I’ve been attending sessions on DSpace. The program seems to be heading in cool directions, development-wise. I was really excited to see Manakin and the possibilities that tool opens for images. With teaching images (non-archival), I think we need to move beyond the VR pigeonhole and the limiting products available. I see a tool like Manakin enabling users to better control their searches and to return visual sets. Manakin also makes it easier to “pipeline” data out of DSpace, which is great for interoperation. Tomorrow there will be a longer presentation on Manakin, so that’s exciting (well, as excited as one can get over repositories).
One thing I keep seeing is that implementations of DSpace seem to spur more services. How far do we go?
I keep thinking of a vision I have for a T&L tool for images: something that displays groups of images in hierarchies that match the hierarchies of subject-level description. A visual way to return image results, not unlike a tag cloud, where the images that most closely match the search terms are most prominent. This can’t be all that far-fetched. But we need to move beyond simple retrieval and export of images for inclusion in other software; we should keep them in the web environment, build the tools that keep them there, and make learning there rely more on the image as text rather than the text as context.
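To make the tag-cloud idea concrete, here is a minimal sketch of the sizing step: given relevance scores for a set of image results, scale each thumbnail so the closest matches render most prominently. Everything here (the function name, the pixel range, the scores) is a hypothetical illustration, not any existing tool’s API.

```python
def cloud_sizes(scores, min_px=60, max_px=240):
    """Map relevance scores to thumbnail sizes in pixels, tag-cloud style:
    the best-matching images get the largest display size."""
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # avoid divide-by-zero when all scores are equal
    return [round(min_px + (s - lo) / span * (max_px - min_px)) for s in scores]

# Example: three image results ranked against the search terms.
sizes = cloud_sizes([0.9, 0.5, 0.1])  # [240, 150, 60]
```

A real implementation would also need the hierarchy part: grouping results under their subject headings before sizing them, so the visual layout mirrors the subject-level description.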
Another session covered some of the advances in China, where a network of museums is allowing cross-searching, and groupings are being developed out of that searching. They explored the model of the digital museum, where content is grouped according to context, rather than the digital library, which provides content by collection, item by item. These are things to think on with learning images as well…our students are already doing online exhibits as projects. Would it be a stretch to create similar groupings for research and for learning?