April 16, 2019

Mid-Thesis Journal

handwritten notes and sketches for the consistent doodles structure

Feedback from Anne Goodfriend, Su Ayun Kim and Uttam Gandhi

  • The user need is clear, but it's hard to identify what the product could solve; explaining it through a user journey would be very helpful.
  • How can ML provide better recommendations than my trained eyes? Can ML give a detailed reason for why a recommendation is made, like stroke weight, pixel size, etc.?
  • I think you downplayed the significance of what you are making.
  • Be clear on what's presented on each slide.
  • It'd help to use simpler language.
user testing during Quick & Dirty Show
partially done wireframe of the consistent doodles

Quick & Dirty Show

Although the overall wireframe and prototype were still unfinished, I decided to make a simple prototype that people could interact with and conducted user testing. The general style, apart from its functionality, followed the original Noun Project site, so I'm only including the wireframe and the handwritten sketches that focus on navigation. The style will eventually be altered throughout the process. Here is some of the feedback I got:

  • The transition between "Icon Select" and "Start a Set" was unclear to users--it needs a clear visual cue
  • suggestion: Look into Google's Search by Image thumbnail
  • Maybe needs some info/description about "Start a Set"?
  • The t-SNE map is interesting and helps to understand the system
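The t-SNE map comes from image-tsne.ipynb: every icon's feature vector is projected down to a 2-D point, so that visually similar icons land near each other. A minimal sketch of the projection step, using random stand-in features (the notebook uses convnet features; the dimensions and parameters here are illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in features: 200 icons x 512 dimensions. In image-tsne.ipynb these
# come from a convolutional net; random values keep the sketch self-contained.
features = np.random.RandomState(0).rand(200, 512)

# Project to 2-D; each row of xy is one icon's position on the map.
xy = TSNE(n_components=2, perplexity=30, init="pca",
          random_state=0).fit_transform(features)
```

Each 2-D point can then be drawn as an icon thumbnail to produce the map users saw.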

Next

  • Audience members were generally confused about how it functions and why it can be useful. It will help to add a solid description on the landing page, or behind a question mark button.
    • Give an actual example of a "set/related recommendation" on the grid, overlap them, etc.
    • show t-SNE map
  • Make the transition between "Icon Select" and "Start a Set" clear
    • Google Search by Image
    • or other Reverse Image Search examples
  • Finish the Search by Collections side

March 5, 2019

Image Feature Extraction

Based on image-search.ipynb and image-tsne.ipynb

  • Solve the alpha channel issue for feature extraction; directly pull images from the API
  • "This is due to the fact that eps does not know about transparency and the default background color for the rasterization was (0, 0, 0, 0) (black which is fully transparent)." --- continuing error in analysis + expanding features
import numpy as np
import scipy.misc
from PIL import Image
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input

def load_image(path):
    img = Image.open(path)
    img = scipy.misc.imresize(np.array(img), (224, 224), interp='bicubic')
    # invert the alpha channel into the RGB channels, then drop alpha --
    # this is the step that fails when an image has no alpha channel
    img[:,:,0] = 255.0-img[:,:,3]
    img[:,:,1] = 255.0-img[:,:,3]
    img[:,:,2] = 255.0-img[:,:,3]
    img = img[:,:,:3]
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return img, x
<ipython-input-22-6fd16072f6c2> in load_image(path)
8 img = Image.open(path)
9 img = scipy.misc.imresize(np.array(img), (224, 224), interp='bicubic')
---> 10 img[:,:,0] = 255.0-img[:,:,3]
11 img[:,:,1] = 255.0-img[:,:,3]
12 img[:,:,2] = 255.0-img[:,:,3]

IndexError: index 3 is out of bounds for axis 2 with size 3
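The IndexError above happens because some icons arrive without an alpha channel, so img[:,:,3] doesn't exist. One possible fix, sketched here with Pillow and NumPy (load_image_rgb is a hypothetical name, and this flattens transparency onto white rather than inverting alpha, which also sidesteps the deprecated scipy.misc.imresize):

```python
import numpy as np
from PIL import Image

def load_image_rgb(path, size=(224, 224)):
    # Converting to RGBA guarantees an alpha channel exists, even for
    # icons saved as plain RGB or grayscale.
    img = Image.open(path).convert("RGBA")
    # Flatten any transparency onto a white background, then drop alpha.
    background = Image.new("RGBA", img.size, (255, 255, 255, 255))
    img = Image.alpha_composite(background, img).convert("RGB")
    # Pillow's default resize filter is bicubic, matching interp='bicubic'.
    img = img.resize(size)
    return np.array(img)
```

The returned array always has three channels, so the downstream feature extraction no longer depends on how each icon was exported.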
  • Test with more than a thousand images (max 50 per call)
import json
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("API KEY", "API KEY2")
endpoint = "https://api.thenounproject.com/icons/{term}?page=2"
response = requests.get(endpoint, auth=auth)
with open('./bicycle-p2.json', 'w') as results_file:
    json.dump(response.json(), results_file)
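Since each call returns at most 50 icons, collecting more than a thousand means walking the page parameter. A sketch under stated assumptions: fetch_icons is a hypothetical helper, auth is an OAuth1 object built as in the snippet above, and the loop simply stops at the first non-200 response because the exact status code for an exhausted page list isn't confirmed here:

```python
import requests

def fetch_icons(term, auth=None, max_pages=25):
    # Collect up to max_pages x 50 icons for a term by paging the API.
    icons = []
    for page in range(1, max_pages + 1):
        endpoint = "https://api.thenounproject.com/icons/%s?page=%d" % (term, page)
        response = requests.get(endpoint, auth=auth)
        if response.status_code != 200:  # assume the pages are exhausted
            break
        icons.extend(response.json().get("icons", []))
    return icons
```

For example, fetch_icons("bicycle", auth=auth) would gather up to 1,250 icons across 25 calls.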

February 26, 2019

Initial User Research

To understand the users of the Noun Project and similar icon search engines (aggregators), I came up with a few questions and scenarios. Filtering for those who had experience using an icon aggregator resulted in a group of people working in the art/design industry. With the selected people, I proceeded with the following questions and scenarios.

Questions

Q. When you’re using the Noun Project or similar icon search engines, how many icons are you usually looking for?

  • 82% answered they often look for more than a single icon
  • 18% answered they look for just one icon

Q. If you are looking for multiple icons, do you prefer them to be created by one person/designer?

  • 91% answered (Y) they prefer their icons to be created by one person/designer
  • 9% answered (N) they don't particularly prefer icons to be created by one person/designer

Q. Please briefly describe why you picked Y/N for the previous question.

  • Y: consistency, same/similar style, overall accordance, uniformity, common theme, "already have a specific icon in my mind"
  • N: reference, "not for the right-away use"

Scenarios

S. You are looking for 2 icons: a dog and a cat. You found a dog icon that you like, but the creator’s collection only has icons of dogs. What would you do?

three icons of dogs in similar style and one question mark
  • modify one of the icons from the collection so that it can look a bit like a cat
  • I'd look for a cat icon from someone else that looks as similar as the dog icon I found
  • I'd find a collection that has both dog and cat
  • Give up the cat (if i have the budget, recruit a cat icon designer) ...

S. You are designing a website, and need 2 types of icons: a cake and a cat. Do you think this set of icons is good to use together? Why?

a cake icon and a cat icon with similar style
  • yes, they look same in style. Maybe the cat is too big?
  • Yes. The two icons have used same background color and a line stroke.
  • Yes, both in simple lines and fluffy.
  • No, the proportion could be better (the cake smaller than the cat, or the cat as a face only) ...

S. You are designing an app, and need 4 types of icons: flower, bicycle, cake and dog. Do you think this set of icons is good to use together? Why?

flower, bicycle, cake and dog icons. bicycle icon has different weight
  • No, the bicycle icon is not designed like the others
  • the bicycle looks too random with other ones
  • No. The bike icon's line stroke is too thin compared to the other three.
  • No, that bicycle looks off (less noticeable) with other icons ...

Next

  1. use a more realistic/practical set of icons (such as doc, image, folder, etc., the kind often seen and used in product design) so the user scenarios can be smoothly understood
  2. make sure this is not about validating an icon’s meaning
  3. focus on the main users of the Noun Project or of similar services (not the viewers of their final products)
  4. move the main topic away from the perception of pictograms and towards expanding the possibility of collaboration inside an icon aggregator and improving its search experience

February 19, 2019

Context/Research Summary

Influential precursor projects, products, research, installations, performances and/or other influences within which my thesis project sits.

  • Describe each project specifically, include images
  • Explain how your thesis project has been inspired/informed by each 
  • Explain how your project will build on each 
  • Explain how your project will be different from each

The Noun Project

“Noun Project.” The Noun Project, thenounproject.com/.

the Noun Project screenshot

The Noun Project is a website that aggregates and catalogs pictograms created and uploaded by graphic designers around the world, and it has the largest dataset of its kind. My project will be based on its API, because I found its library the most diverse while keeping the simplest forms. The website relies on search by tag, and that is something I'll try to do differently. Link to a related thread: https://www.quora.com/What-is-the-best-icon-library

Modern Pictograms for Lottie

“Modern Pictograms for Lottie.” Airbnb Design, airbnb.design/modern-pictograms-for-lottie/.

Gif from Modern Pictograms for Lottie

Salih Abdul-Karim, a motion designer at Airbnb, experiments with various ways to create animation-friendly icons -- or “artworks”, as he describes them. He gives an idea of what the basic components of an icon are and, further, of what the Noun Project's community aims to be: a collaboration between different people and overlapping practices. My project has nothing to do with animating icons, but the article is inspiring in the way it brings up the nature of the Noun Project's community.

Brandmark Logo Maker

“Brandmark - the Smart Logo Maker.” Brandmark Logo Maker - the Most Advanced AI Logo Design Tool, brandmark.io/intro/.

Brandmark - the Smart Logo Maker screen shot

Brandmark uses deep learning tools to generate logos composed of an icon, typography, and a color scheme. It uses a convolutional net to filter out common symbols and shapes that are not "brandable", assigning each a legibility score and a uniqueness score. Technically, it's exactly what I attempt to achieve -- except that I take a different approach to sorting icons, as I'm not interested in making a single icon stand out as a unique logo.

February 19, 2019

Production Schedule/Implementation Plan

  • Until Feb 15: concept development, brief technical testing (deliverables: research)
  • Feb 16 ~ March 5: extract visual patterns from the Noun Project; user test of the 'audiences' who view icons (deliverables: research, wireframes, user flow, coding)
  • March 6 ~ April 16: visualize the analysis; start designing the prototype of the interface based on the previous results; user testing of the 'producers' who search for and include icons in their work (deliverables: high-fidelity prototype, usability test, coding)
  • April 17 ~ May 6: finish the prototype design and prepare the presentation (deliverables: high-fidelity prototype, presentation prep)

February 11, 2019

Concept Development 2

This week was much about narrowing down my topic and looking for available tools. During last week's group meeting, I proposed that my focus would lean towards pictograms, for the following reasons:

  1. Alphabets vary enormously in characteristics such as typography and structure. I've done a project about my bilingualism, and the biggest challenge in expanding that project was my ignorance of other alphabets. Even a single alphabet has its own complicated system (often deeply connected to its culture), and I would like to approach my thesis from the point of visual perception, rather than focusing on a specific alphabet or culture.
  2. Similar to the first reason, I would like to pick a language that is more universal and primitive.

Pictogram

"A pictogram, also called a pictogramme, pictograph, or simply picto, and in computer usage an icon, is an ideogram that conveys its meaning through its pictorial resemblance to a physical object. Pictographs are often used in writing and graphic systems in which the characters are to a considerable extent pictorial in appearance."

Google Material Design Icons

Doodles, Pictograms, and Letters

Language is the system of signs. It is “a storehouse filled by the members of a given community through their active use of speaking, a grammatical system that has a potential existence in each brain, or, more specifically, in the brains of a group of individuals" – Ferdinand de Saussure 13-14 in Vidra-Mitra, 2017

Cave paintings in Magura Cave, Bulgaria. Photo by flickr.com

A pictogram sits somewhere between a primitive drawing and a letter; not only in form, but also in historical order. It's more systematic than a doodle, yet requires less abstraction and training than a letter. It is a doodle, yet a "consistent doodle".

ISO 7001 (public information symbols)

"ISO 7001 ('public information symbols') is a standard published by the International Organization for Standardization that defines a set of pictograms and symbols for public information."

Japanese green exit sign with Running Man moving to the left through a doorway

For example, the international exit sign, "Running Man", was introduced into the 1987 standard as a consistent and international approach to move away from using words in the native language, after the Sennichi Department Store Building fire.

Consistency and System

Ferdinand de Saussure claims language is a relational system of signs. One of the problems with a pictogram is that it is highly diverse in style and in the interpretation of what it signifies.

two cat icons from the noun project

These two icons are both from searching "cat" in the Noun Project. The Noun Project is the biggest platform that shares such pictograms, created and uploaded by graphic designers around the world.

A pictogram can become a "doodle" rather than a "language" through lack of consistency. Although applying regulation through a single standard such as ISO 7001 is the simplest answer, it's not always an available option -- especially considering that a pictogram is ultimately another form of creative expression. Also, pictograms are often recommended to differ in their level of detail, scale, or weight, depending on their purposes and environments. However, at least inside one system (i.e. a mobile application), they have to align together; otherwise they become mere doodles.

When people open a website or enter a building, they're entering a new system. What can improve and build the legibility of pictograms is consistency within a system, instead of consistency across all systems (like ISO 7001). It's a much more achievable and friendly approach for keeping pictograms a language without destroying their diversity. It will also give a nice observation of how people visually perceive and interpret things.

Notes

  • The Noun Project API will be the source of my thesis
  • Items should be collected as /icons/{term} rather than /icons/{collection} (already formed collections)
  • Is the maximum number of items per call 50? (reach out to the Noun Project)
  • Let's not use Creative Commons works, but public domain works only
  • Possibilities of application: a better way of sorting icons compared to tag-based categorization, draw-and-search for icons, changing a group of icons into a similar style all at once

December 15, 2017

Time: process & results

(Photos by Nicolas Peña-Escarpentier)

What's time? The project “Time” started from throwing out simple questions about time and its standard, eventually evolving into a combination of a physical piece and a digital visualization about different time zones. The physical piece consists of two parts: an inner cylinder that represents the UTC time zone map, and a rotatable outer sphere that contains the light source. As the user rotates the outer sphere, it creates movement in the light. While the light is moving, the inner cylinder, which has 24 light sensors, one for each longitude slice, reacts to the changing brightness.

The physical piece will be installed on a transparent, round table that I personally own, approximately 40” in diameter and 30” in height. People should be able to walk around the table and see the cylindrical time zone map inside the sphere. The digital visualization will be projected from underneath the table; thanks to its transparency, the table can directly show the visualization on its top surface. In this way, people can view both the physical and digital pieces without being distracted.

When the light shifts from one time zone to another, the digital visualization reflects the movement as well by changing its gallery of skylines. Those skylines come from a survey of the ITP community about which cities its members came from, and they are organized by longitude.

All the coding parts can be found in my or Huiyi's GitHub repository.

Process:


The skyline photos are collected in a single folder and organized by city with the same 24 steps. The archive is loaded via a .json file.


The 24 light sensors send an array of numbers. During this process, Huiyi and I had to add black tubes around the sensors to block ambient light. Usually, when the light source is not close enough, the reading is under 10~15. When the light source is in front of a sensor, it gives a value between 50~100. Using that, if there's enough difference between the max and the min, the sketch marks the max as 12:00 PM.

What happens if the difference between max and min is too small -- such as when the light is off, because I put a switch on the light source without thinking everyone was going to press it? Well, it basically makes the whole sketch go to "sleep." This won't and shouldn't happen in real life, but since our work is not literally showing scientific information, we found it to be a good visual effect that wraps up the whole idea.
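The reading-to-noon logic described above can be sketched as a small function; the name and threshold are illustrative, not the project's actual code:

```python
def light_to_noon_index(readings, threshold=30):
    # readings: 24 sensor values, one per longitude slice. Ambient light
    # reads under ~10-15; a nearby light source reads 50-100, so a spread
    # below the threshold means the light is off.
    spread = max(readings) - min(readings)
    if spread < threshold:
        return None  # too little contrast: the sketch goes to "sleep"
    # The brightest sensor's slice is treated as 12:00 PM.
    return readings.index(max(readings))
```

Returning None instead of a slice index is what lets the visualization fall into its sleep state rather than pointing at an arbitrary longitude.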

Second User Testing:

In order to give a general idea of how it functions, I brought the partially completed physical piece and a sketch that works with slider input. Although Huiyi and I were in the finishing stage at this point, the user testing was helpful for making small fixes. The most common feedback I got was that it's hard to recognize the light's location inside the sketch.



After the user testing, Huiyi and I realized the need for a light indicator, and initially built a white line that goes across the sketch and points to the max input -- which is the version I presented in the last ICM class. Later it became a line with an orange gradient, to make the light change more noticeable and dramatic. Furthermore, the text color for the max-input zone will turn yellow as well.

Results:

This is the second short video, containing some of the process and imagery of how it works. There will be more documentation and modification before the show.
