October 25, 2018

Talk to Makeup Guru

Every day there is another set of new makeup products. Sometimes it's about texture, and sometimes it's about the brand itself - but most of all, it's about color.

"For those of us who are obsessed with makeup — lipstick in particular — there's no such thing as too many of our favorite products. With our laser vision, we can tell the true difference between any two shades, no matter how similar they may look to the untrained eye." — Twitter Boyfriends Are Confused About These Two Obviously Very Different Worlds Apart Lip Colors, KARA NESVIG

Makeup Guru was designed to deliver a "FRESH MAKEUP TUTORIAL: WITH BRAND NEW {color} EYESHADOW!" based on your favorite color at the moment. The original text is from one of the tutorials in Deck of Scarlet. The project is also a continuation of Makeup Guru from the Reading and Writing Electronic Text class, but in a more conversational form.

 

example of using Makeup Guru when you choose pink lavender color

Process

To generate entities and functions, I used the xkcd color data from Corpora. The full code can be found here: Makeup Guru. The process is composed of (1) converting hex to RGB, (2) building color vectors based on the RGB values, (3) receiving the user's favorite color and seeking its nearest colors, and (4) placing those colors into the tutorial along with other elements.

example of using Makeup Guru when you choose one of green shades

example of using Makeup Guru when you choose gross green color

xkcd.json provides a wide variety of 952 colors, and includes shades such as poo, baby puke green, snot, diarrhea, etc. Of course, beauty is subjective (especially when it comes to a component as abstract as color), so some people might want those shades in their makeup. Regardless of the names, Makeup Guru will go through the same process and give you the tutorial.

1. Hex to RGB

def hex_to_int(s):
    s = s.lstrip("#")
    return int(s[:2], 16), int(s[2:4], 16), int(s[4:6], 16)

colors = dict()

sorting.py returns the same xkcd colors along with RGB values as sorted.json, which looks like this:

{
  "cloudy blue": [172, 194, 217],
  "dark pastel green": [86, 174, 87],
  "dust": [178, 153, 110],
  "electric lime": [168, 255, 4],
  "fresh green": [105, 216, 79],
  ...
}

2. Color Vectors

The original code I have is in Python and p5.js (as separate sketches), where it was relatively easy to create the vector space. To adapt the same structure here, I used the vectors node module. var dist = require('vectors/dist')(3) (and it only operates with var) creates a three-dimensional distance function, which can be used for locating [red, green, blue] values.

function findNearest(v) {
  var dist = require('vectors/dist')(3);
  let keys = Object.keys(colorVector);
  keys.sort((a, b) => {
    let d1 = dist(v, colorVector[a]);
    let d2 = dist(v, colorVector[b]);
    return d1 - d2;
  });
  nearestColors.length = 0;
  for (let i = 0; i < 7; i++) {
    nearestColors.push(keys[i]);
  }
}

findNearest() seeks the 7 nearest colors to the "favorite color" and stores them in nearestColors - these 7 colors are used for generating the overall tutorial text.
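Putting the pieces together, here is a minimal sketch of how colorVector might be loaded and findNearest() might be called. It assumes sorted.json can be required directly and condenses the sort into one line; the names colorVector and nearestColors come from the webhook above, while the file path and the logging line are only for illustration.

// condensed sketch: load sorted.json, then look up the 7 nearest shades to an RGB triple
var dist = require('vectors/dist')(3);          // distance function for 3D vectors
const colorVector = require('./sorted.json');   // { "cloudy blue": [172, 194, 217], ... }
const nearestColors = [];

function findNearest(v) {
  let keys = Object.keys(colorVector);
  keys.sort((a, b) => dist(v, colorVector[a]) - dist(v, colorVector[b]));
  nearestColors.length = 0;
  for (let i = 0; i < 7; i++) {
    nearestColors.push(keys[i]);
  }
}

findNearest(colorVector['cocoa']);   // [135, 95, 66]
console.log(nearestColors);          // "cocoa" comes first, followed by its six closest shades

Because the favorite color is itself in colorVector (at distance 0), it always ends up as nearestColors[0], so the tutorial effectively works with the favorite plus its six closest shades.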

3. Favorite Color and Nearest Colors

app.intent('Pick Color', (conv, params) => {
  conv.data.favColor = params.xkcd;
  const pos = colorVector[conv.data.favColor];
  findNearest(pos);
  ...
});

For example, if you picked "cocoa" as your favorite color, then it will be:

pos = colorVector["cocoa"]
findNearest([135, 95, 66])

4. Colors in Tutorial

conv.close(
  `How exciting! To give you a bold look, I'm switching to a bigger blending brush.` +
  ` And I’ll use it to blend out the edges of that ` + nearestColors[3] +
  ` shade into the crease and, you know, make it look really nice and seamless.` +
  ` And see how that eyeshadow kind of blended into more of like a ` + nearestColors[4] +
  ` shade? When it’s fading, it looks more ` + nearestColors[5] + `, rather than ` + nearestColors[6] + `.` +
  ` It is really nice. It looks like I used many different colors when I just used one single eyeshadow.`
);

Further Notes 💄

  • Sometimes I think marketing operates in a similar way to fortune telling: consumers give the "seed" of their story, and fortune tellers/industries rephrase and repackage the content given by the consumers. So you are actually offering them a hint for targeting you.
  • Throughout the tutorial, the Makeup Guru talks about only one eyeshadow product, described in many different phrases.
  • node-word2vec might be interesting to use for future related projects.

October 17, 2018

Anatomy of an AI System

Anatomy of an AI System map

Anatomy of an AI System By Kate Crawford and Vladan Joler

  • designed to either “blend in or stand out”
  • “Because Alexa is in the cloud, she is always getting smarter and adding new features.”
    • hard to explain and understand the extraordinary complexity of these artificial intelligence agents
    • but do people care?
    • requires “a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data”
    • reminds me of Sandra
  • lithium extraction
    • Atacama regions in Chile and Argentina
    • ‘grey gold’
    • lithium-Ion batteries
    • limited lifespan
    • invisible threads of commerce, science, politics and power
  • three processes: material resources, human labor, and data
    • the ethereal metaphor of ‘the cloud’
    • it’s hard to ‘see’ any of these processes individually
  • human user = chimera (hybrid)
    • a consumer, a resource, a worker, and a product
    • aren’t we always a ‘chimera’ in real life as well?
    • helping to train the neural networks
    • ‘collective intelligence’?
  • The echo = an ‘ear’ in the home
  • statua citofonica (the ‘talking statue’)
    • “listening systems” = power, class, and secrecy

October 9, 2018

Dialogflow: Fulfillment and Logic

Sketchbook gif from Don't hug me I'm scared

Assignment 4 is a practice based on Codelabs (part 1 and part 2) for building Actions for the Google Assistant. It contains elements such as Permission, Suggestions, and BasicCard, along with the use of a webhook.

It is a parody of Don't Hug Me I'm Scared.
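For reference, here is a minimal fulfillment sketch using those three elements with the actions-on-google library, roughly following the Codelabs pattern; the intent names, suggestion text, and card content below are placeholders rather than the actual assignment code.

// minimal sketch of a Dialogflow fulfillment with Permission, Suggestions, and BasicCard
// (intent names and card content are placeholders)
const functions = require('firebase-functions');
const {dialogflow, Permission, Suggestions, BasicCard, Image} = require('actions-on-google');

const app = dialogflow({debug: true});

app.intent('Default Welcome Intent', (conv) => {
  // ask for the user's name before starting
  conv.ask(new Permission({
    context: 'To greet you properly',
    permissions: 'NAME',
  }));
});

app.intent('actions_intent_PERMISSION', (conv, params, permissionGranted) => {
  const name = permissionGranted ? conv.user.name.display : 'friend';
  conv.ask(`Okay ${name}, what is your favorite idea?`);
  conv.ask(new Suggestions('Green is not a creative color'));
});

app.intent('Show Card', (conv) => {
  conv.close(new BasicCard({
    title: 'Sketchbook',
    text: 'Time to get creative!',
    image: new Image({
      url: 'https://example.com/sketchbook.gif',
      alt: 'Sketchbook',
    }),
  }));
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);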

October 1, 2018

Sandra Podcast

Sandra Podcast Image

Sandra podcast

  1. episode 1 (Hope is a Mistake, 17:59)
  2. episode 2 (The User Experience, 21:51)

Sandra, the series of podcast episodes about a virtual assistant that is actually powered by an army of real people, reminded me of one of the second week's readings: The Difference, followed by the author's note. In the comments, many readers were debating whether it was a human being or a chatbot answering in the chatroom. My favorite comment was by Loki: “…But if he is human, than this is a great story to show how possible it is to program someone. Keep telling someone that they aren’t a person and they’ll eventually start to believe it themselves.”

Therefore, Sandra is not only about faking those sleek and mighty technology products but also about identity. Who's containing whom? In this podcast series, it seems like Helen is “inside” the Sandra machine, making Sandra a mere container. Why can't Sandra just sound like Helen, like a normal human? It also resonates well with the other podcast episode, Helpful Mom Voices, which talks about how TTS nowadays doesn't need to “appear” human but still attempts to bring high technology “down to human level.” In the case of Sandra, it's the opposite. It seems like as soon as the answers/reactions from the humans behind Sandra are transformed into the typical AI assistant voice, people feel comfortable - even too comfortable, to the point of asking improper and impolite queries. It's such an intriguing storyline as a podcast, but I hope that no one actually comes up with the same idea.

October 1, 2018

Dialogflow: Intents, Entities and Contexts

Assignment 3 is an adventure game that the user plays only through simple Q&A. It's built with Dialogflow but needs some revision due to my incomplete understanding of entities and contexts.

It was inspired by text-based role-playing games such as Candy Box! and A Dark Room.

September 24, 2018

Voice-Controlled Game: One Hand Clapping

One Hand Clapping Image

"It seems like the invention of every new technology comes along with games." - Paul Cutsinger, Amazon

The game industry is where engineering, design, and art merge to create virtual environments. There have been numerous attempts to develop new ways of interaction, either through the storyline (i.e., metafiction) or through input and output methods. Many games have been studied to understand interactive aspects between humans and machines, such as Black & White in UI research.

The first time I encountered One Hand Clapping was through one of the Twitch streamers I often watch. Its singing input method and beautiful visual design were interesting enough for me to download it and actually try it with my friend.

I have to mention that this game was rather hilarious when I merely watched the streamer playing it, because he is heavily tone-deaf and failed to sing even the simplest notes. It's similar to how a person has to pronounce English precisely for voice recognition to "recognize" them correctly.

One Hand Clapping and the funny streamer reminded me how challenging it still is to use inputs such as voice and motion, yet I find them the most humane and refreshing input methods at the same time. It would also be exciting to involve multiple players, because that was the first thing my friend and I attempted to do with the game.

September 24, 2018

Voice Input and Snake Game

Snake game with voice recognition

Assignment 2 takes voice input from a person to control the snake in a snake game (example by Prashant Gupta).
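As a rough sketch of the voice-input side, the browser's Web Speech API can map spoken direction words onto the game; this is an assumption about the approach rather than the exact code from the example, and moveSnake() below is a hypothetical stand-in for the game's own direction handler.

// minimal sketch: listen for spoken directions and pass them to the game loop
// (uses the webkit-prefixed Web Speech API in Chrome; moveSnake() is a placeholder)
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.continuous = true;        // keep listening while the game runs
recognition.interimResults = false;

const directions = ['up', 'down', 'left', 'right'];

recognition.onresult = (event) => {
  const transcript = event.results[event.results.length - 1][0].transcript.trim().toLowerCase();
  const direction = directions.find((d) => transcript.includes(d));
  if (direction) {
    moveSnake(direction);             // placeholder: update the snake's heading
  }
};

recognition.start();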

 

September 17, 2018

The Difference & Helpful Mom Voices

The Difference followed by the author’s notes

  • 2008-08-21 20:17:43 comment by Loki: "I disagree paradoxia...I mean everyone is entitled to their own opinion. But if he is human, than this is a great story to show how possible it is to program someone. Keep telling someone that they aren't a person and they'll eventually start to believe it themselves. Still, interesting writing all the same."
  • chatbot or human?

Helpful Mom Voices podcast episode from Reasonably Sound

  • Alexa, Siri.. etc
  • voice-over artists: Susan Bennett, Karen Jacobsen... etc
  • TTS (text to speech) technology
  • Character in digital assistance
  • Female — “Helpful Mom Voices”
  • bring in them “higher order of living”, rationality, intelligence, soul, spirituality
  • voice: expression of agency — becoming its own entity; but it’s not elevated to the status of rational human
  • before, it was imitating a human and making a show - now it doesn’t need to “appear” human; now it’s a logic operation.
  • bring high technology down to “human level”
  • history of operator in late 19th century
  • higher pitch = “more pleasant” = more memorable information = unless it has to do with “masculine” subjects like math
  • people have expectation and rating on voices
  • “symbolic gender”

- also reminds me of the Vocaloid culture & marketing in Japan.

 

September 17, 2018

Non-speech Input to Speech Synthesis

Week 1 assignment that changes the reading speed depending on the number of words in the sentence (fewer or more than 5). It would be nice to figure out how to directly map the word count, and to go further by using paragraphs to create speed and color values.
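As a minimal sketch of that mapping (assuming the browser's speechSynthesis API, which may not be the exact library used in the sketch; the 5-word threshold follows the description above, but the two rate values are placeholders):

// minimal sketch: speak each sentence at a different rate depending on
// whether it has more or fewer than 5 words (rate values are placeholders)
function speakSentence(sentence) {
  const wordCount = sentence.trim().split(/\s+/).length;
  const utterance = new SpeechSynthesisUtterance(sentence);
  utterance.rate = wordCount > 5 ? 1.5 : 0.7;
  window.speechSynthesis.speak(utterance);
}

speakSentence('Hello there.');                                      // short sentence, slower
speakSentence('This sentence has quite a few more words in it.');   // long sentence, faster

// a direct mapping could instead scale the rate with the word count,
// e.g. utterance.rate = Math.min(wordCount / 5, 2);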

 

September 4, 2018

Schedule: Fall 2018

Sept ~ Nov/Dec '18

  • Big Screens
  • Drawing on Everything
  • Open Source Studio

Sept ~ Oct '18

  • Hello, Computer: Unconventional Uses of Voice Technology

Oct ~ Dec '18

  • Computational Approaches to Typography