From Visual to Language
Enrich users' visual inspiration by connecting verbal and logical language to visual content. Show more background information, resources, or stories about the visual content, such as the time, author, art movement, and related events that lie outside our own vision and knowledge, by accessing metadata or recognizing the content. This can be done by automatically labeling or tagging a single image or a group of images.
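As a minimal sketch of the metadata-to-tags idea: the artwork record and its fields below are hypothetical, not a real platform's API; the point is only that known metadata can be surfaced as clickable tags.

```python
# Hypothetical artwork record; a real platform would pull these fields
# from its own metadata store or from image recognition.
def tags_from_metadata(artwork: dict) -> list[str]:
    """Turn the metadata fields we know about into human-readable tags."""
    fields = ["author", "year", "art_movement", "related_event"]
    return [str(artwork[f]) for f in fields if artwork.get(f)]

starry_night = {
    "title": "The Starry Night",
    "author": "Vincent van Gogh",
    "year": 1889,
    "art_movement": "Post-Impressionism",
}
print(tags_from_metadata(starry_night))
# ['Vincent van Gogh', '1889', 'Post-Impressionism']
```

Missing fields (here, `related_event`) are simply skipped, so the same function works for sparsely annotated artworks.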
Problem: Sometimes, when we look for inspiration, we go to Behance, Pinterest, or Dribbble and type keywords into the search bar. The system returns similar images or artworks; we like some and not others, but the results are limited to visual content, and it is hard to describe in language what matches our preference. Our keywords may not be accurate. Can platforms help refine the tags or labels by learning what we like from a single visual item or a group of them?
- Go to an artwork platform
- Search with one or a couple of keywords
- The platform returns images or artworks
- Label which ones we like or dislike
- The platform returns the tags or labels that the liked artworks have in common
- The user can click the tags or labels to see more similar work or learn more details about the artworks
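The tag-refinement step above can be sketched as follows. This is an illustrative assumption, not a platform's actual algorithm: each liked artwork already carries a tag list (from metadata or recognition), and we surface the tags shared by at least half of the liked set.

```python
from collections import Counter

def shared_tags(liked: list[list[str]], min_share: float = 0.5) -> list[str]:
    """Tags appearing on at least `min_share` of the liked artworks,
    ordered most-frequent first (ties broken alphabetically)."""
    # Count each tag once per artwork, even if it is listed twice.
    counts = Counter(tag for tags in liked for tag in set(tags))
    threshold = min_share * len(liked)
    return sorted(
        (t for t, c in counts.items() if c >= threshold),
        key=lambda t: (-counts[t], t),
    )

# Hypothetical tag lists for three liked artworks.
liked = [
    ["watercolor", "landscape", "impressionism"],
    ["watercolor", "portrait", "impressionism"],
    ["oil", "landscape", "impressionism"],
]
print(shared_tags(liked))
# ['impressionism', 'landscape', 'watercolor']
```

Each returned tag could then become a clickable link back into search, closing the loop between what the user likes and the language used to describe it.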