Ideas for future development

Here are some ideas for future development of Vignette. They have been floating around in my head and in my notebook for a while, and it was time to write them down. Elaboration, and blog posts on the implementation of some of these features, will follow.

  • Conversation is important, and a (significant) step up from conversation with a chatbot is conversation with a real person. Create a “photo-album” mode for video chat that lets all participants flip back and forth through a jointly controlled photo album while talking and sharing stories. Optionally, the conversation could be recorded and associated with particular photos, creating a kind of oral narrative to accompany viewing those photos in the future. You can think of this as similar to current online photo album sharing and communication tools (e.g., Facebook and Google) with the added advantages of audibility and immediacy.
  • Photos are important traces of our lives, but so are all of the other digital artifacts we leave behind. In scrapbooking, one often uses boarding passes, ticket stubs, and other physical artifacts to tell the story. Those artifacts now live on our phones – why not archive them into the photo library so that they can be viewed and remembered like photos?
  • Simple interactions can help people get more out of their photos. Trigger a daily reflection on a mobile device at the end of any day on which it detects that a picture has been taken. The interaction could be brief and straightforward, with questions perhaps similar to those asked by my Twitter bot, @SnapshotReflect.
  • Take the conversation engine from Vignette and make it spoken, with text-to-speech and speech-to-text.
  • Use the spoken dialog to create better video slideshows in which the user tells their own story.
  • New output formats from the written story in Vignette, including low-cost printed pamphlets, video slideshows, interactive websites, and a “journal mode.”
  • Suggesting stories within large photo archives (see the sketch after this list) by
    • Finding loops/journeys that end where they started after traveling a long distance
    • Finding the first time a person appears in a photo, or when someone appears after a long gap
    • Finding the first time a place appears in a photo, or when the user returns after a long gap
  • Physical computing interfaces that allow users to browse their photos on real pieces of paper, sort them, shuffle them, flip them over, and write notes on them. The photos would be printed, or projected from a ceiling-mounted projector (e.g., like the Realtalk system at Dynamicland, though projectors have limited resolution). Notes and ordering would be detected by a camera, and everything would be used to organize the “real” digital collection.
  • More human mannerisms in chatbot dialog
    • Long pauses, to encourage further elaboration the way a real interviewer would
    • Typing notification (…) appearing and disappearing
  • More robotic mannerisms in chatbot dialog
    • Play up the fact that it’s a robot through font and visual design choices. This may make people more comfortable with errors and mistakes that the robot might make. Make the robot socially awkward, because it will be anyway.
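
As a rough illustration of the story-suggestion heuristics above, here is a sketch in Python. It assumes each photo is represented as a dict with a `taken` datetime, `lat`/`lon` coordinates, and a list of tagged `people`; those field names and the distance and time thresholds are placeholders for illustration, not Vignette's actual data model.

```python
from datetime import timedelta
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def first_or_reappearing_people(photos, gap=timedelta(days=365)):
    """Yield (photo, person) pairs where a person appears for the first time,
    or reappears after more than `gap` since their last photo."""
    last_seen = {}
    for photo in sorted(photos, key=lambda p: p["taken"]):
        for person in photo.get("people", []):
            if person not in last_seen or photo["taken"] - last_seen[person] > gap:
                yield photo, person
            last_seen[person] = photo["taken"]


def find_journeys(photos, min_radius_km=100, home_radius_km=25):
    """Find runs of photos that wander at least `min_radius_km` from their
    starting point and then return to within `home_radius_km` of it."""
    journeys, start, farthest, trip = [], None, 0, []
    for photo in sorted(photos, key=lambda p: p["taken"]):
        if start is None:
            start, farthest, trip = photo, 0, [photo]
            continue
        trip.append(photo)
        d = haversine_km(start["lat"], start["lon"], photo["lat"], photo["lon"])
        farthest = max(farthest, d)
        if d <= home_radius_km and farthest >= min_radius_km:
            journeys.append(trip)  # came back to the start after a long trip
            start, farthest = None, 0
    return journeys
```

The place-based heuristic would follow the same pattern (a single pass over time-sorted metadata); only the key being tracked changes from people to places.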

Conversation in action

Previously, I discussed methods of clustering photos into geographically and temporally related groups, and some of the trade-offs involved in various clustering algorithms and parameters. Since then, I have arrived at a workable set of defaults, and created an interface that allows clusters to be merged and split. I have not yet built a tool for custom cluster editing, and the split tool does not allow the user to select a split point. Instead, it uses k-means clustering with k=2 to try to find the most logical point to split the cluster. This is a user interface issue more than a technical issue – I am not yet sure how to integrate a more customizable cluster creation tool into the application.
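
To make the split behavior concrete, here is a minimal sketch of a k=2 split, assuming each photo carries a `taken` datetime and that the split is driven by time alone; the real tool may weigh location as well, so treat the feature choice (and the use of scikit-learn) as illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans


def split_cluster(photos):
    """Split one cluster of photos into two using k-means with k=2."""
    if len(photos) < 2:
        return list(photos), []
    photos = sorted(photos, key=lambda p: p["taken"])
    # One feature per photo: seconds since the earliest photo in the cluster.
    t0 = photos[0]["taken"]
    X = np.array([[(p["taken"] - t0).total_seconds()] for p in photos])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    # With a single 1-D feature and time-sorted photos, each label forms a
    # contiguous run, so the boundary between the runs is the split point.
    first = [p for p, lbl in zip(photos, labels) if lbl == labels[0]]
    second = [p for p, lbl in zip(photos, labels) if lbl != labels[0]]
    return first, second
```

A user-selectable split point could simply replace the k-means boundary with an index chosen in the interface; the harder part, as noted above, is designing that interaction.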