Awaken my child, and embrace the glory that is your birthright. Know that I am the Overmind; the eternal will of the Swarm, and that you have been created to serve me. ~The Overmind, StarCraft
In 1912, Carl Jung published Symbols of Transformation, a work in which he began detailed development on his idea of the collective unconscious, one of his many enduring additions to the field of psychology (and, in my opinion, one of the more ridiculous, as it tends to feel like nothing more than refined mysticism). The collective unconscious as described by Jung is actually a knotty little thing, as he was often rather ambiguous in his various descriptions of it, allowing for a wide range of interpretations and suggestions as to its true nature.
In his The Archetypes and the Collective Unconscious, Jung lays out his idea of the collective unconscious in the first few pages:
A more or less superficial layer of the unconscious is undoubtedly personal. I call it the “personal unconscious”. But this personal layer rests upon a deeper layer, which does not derive from personal experience and is not a personal acquisition but is inborn. This deeper layer I call the “collective unconscious”. I have chosen the term “collective” because this part of the unconscious is not individual but universal; in contrast to the personal psyche, it has contents and modes of behaviour that are more or less the same everywhere and in all individuals.
His collective unconscious was less an all-encompassing, eternal world consciousness than a series of psychic structures underlying all human experience- the archetypes (the Self, Anima/Animus, Shadow, etc.). From these spring archetypal images (like the hero, common across all cultures and times) and events (such as marriage and initiations). For Jung, the collective unconscious and archetypes served as a kind of DNA of the psyche. Much as genetics determine our physical traits through a mere handful of nucleobases and amino acids, Jung believed that the collective unconscious shaped the individual psyche through a small number of archetypes.
That’s as far as we’re going to go with that because I find most aspects of the collective unconscious to be nonsensical (I have a love-hate relationship with psychology in general).
I bring it up as a foil for what I’m going to discuss next. For if we have a concept for the collective unconscious, surely we have to have one for a collective conscious as well.
Actually, aspects of the collective conscious appear in psychology as well, particularly in the idea of groupthink. A phenomenon arising within groups of people, it's a problem-solving method wherein group members attempt to reach a consensus without conflict or critical evaluation of alternative ideas/viewpoints. William H. Whyte called groupthink, "a rationalized conformity— an open, articulate philosophy which holds that group values are not only expedient but right and good as well."
But groupthink doesn’t arise in all groups (…rather obviously). It’s most likely to occur when the group is composed of members with similar backgrounds, when the group is insulated from outside opinions, and when there are no clear rules for decision making.
I think my favorite example of groupthink is not one of the more obvious political ones, but rather the movie 12 Angry Men. I have a personal connection to this movie- story for another day, that. 11 of the 12 jurors in the case succumb to blind agreement that the defendant is guilty. Their inability to rationally look at the situation and consider alternative viewpoints makes them a strong example of groupthink (and a rather horrific look at the potential for blind judgments in the legal system).
Thankfully, smooth-talking Henry Fonda is there to turn the tables.
But the first thing that really pops into your mind when you think about the idea of a collective conscious isn’t some psychological phenomenon you read about in your Intro to Psych course that you took because your upperclassmen friends told you it was a blow-off class, it’s something that belongs in the realm of science fiction:
The hive mind.
The Zerg in StarCraft, the Geth and Rachni in Mass Effect, the Overlords in Childhood’s End, the Toclafane and Vashta Nerada (meep!) in Doctor Who, the Dark Ones in Metro 2033 (I haven’t actually played this game, so I’m taking the internet’s word on this- I’m including it because I just wanted to say that I was actually reading about this game the other day and really want to play it)… the list goes on and on.
Unnecessary, yet awesome, Magic the Gathering moment. Bask in it, dear galleons.
All of these species exhibit some form of hive or group mind. We are used to portrayals of hive minds wherein the individual members refer to themselves as “we” or “us,” denoting their lack of individuality. They are a collective- one mind in many bodies (or one memory shared between bodies or some variation thereupon), exhibiting a telepathic connection between individual units. Often controlled by a queen-type figure, the hive mind is a devastating creation. Because there are no individuals, there is no dissent. No alternative modes of thinking. No sudden spats of morality. No crippling love or guilt or other emotions.
It’s the Utopia Big Brother and Joseph Stalin both craved.
The thought of being part of a hive mind causes a cold shiver to run down my spine. I am a confusing, bizarre, nerdy, emotionally-retarded, introverted, sexually frustrated, abrasive, half-assed intellectual with a predilection for immature jokes, frequent cussing, rampant giggling, and making absurd associations. But whatever strange compound of personality flaws I am, the fact remains that it is me. An individual. And I wouldn’t trade that sense of self, that unique sensation of I, for anything.
I assume, galleons, that the same can be said of most of you.
So it isn’t surprising that my instinctive reaction when I first read about a “human hive mind” was one of horror. But if there’s one thing that has remained steady throughout my life, it’s my insatiable, morbid curiosity. Thus, I kept reading.
In the end, the article wasn’t really about a hive mind in the sense of the images we have from our science fiction favorites. Rather, it was about the power of crowdsourcing (a portmanteau of “crowd” and “outsourcing” that is basically summed up by its parts- outsourcing to a crowd of people) in increasing the power of AIs.
Which made me breathe a quiet sigh of relief, naturally.
The information was interesting, however, and I think the concepts of crowdsourcing and crowd wisdom are worth discussing, so that’s what we’re gonna do.
What exactly is the wisdom of the crowd?
Crowd wisdom is the process of taking in the collective opinion of a group of individuals rather than a single expert’s. Which sounds suspiciously similar to group minds and groupthink, doesn’t it?
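The classic illustration of crowd wisdom is Francis Galton's ox-weighing contest: individual guesses were all over the place, but the aggregate landed strikingly close to the true weight. Here's a minimal simulation of that effect- the true value, the noise level, and the crowd size are all invented numbers, just there to make the point visible:

```python
import random
import statistics

random.seed(42)
true_weight = 1198  # the quantity the crowd is estimating (hypothetical)

# Each individual guess is noisy in its own way...
guesses = [true_weight + random.gauss(0, 150) for _ in range(800)]

# ...but the aggregate (here, the median) lands close to the truth,
# even though most individual guesses are far off.
crowd_estimate = statistics.median(guesses)
print(round(crowd_estimate))
```

No single guesser is reliable, but the errors roughly cancel out in aggregate- which is exactly the property that makes the crowd's collective opinion competitive with a single expert's.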
The concepts are related in that we are looking at the power of the whole over the power of the one. The phrase “right reduces to might” has been popping into my mind at the oddest moments in the last two weeks, and I find this to be one of the situations where it actually fits. The might of the crowd’s opinions becomes what is considered truth.
If ever there was an argument for subjective truth in modern culture (I still feel historiography, the study of the shifting narrative of history, is the best one overall- maybe we’ll talk more about that in the coming weeks, because that’s an old favorite of mine that I don’t think I’ve really expanded upon here), the wisdom of the crowd would be it.
The internet has already started capitalizing on the wisdom of the crowd, as many of you have probably noticed. Crowd wisdom powers search engines like Google, which aggregates searches from across the globe. Have you ever wondered how Google’s search results are organized? Maybe you already realized that they are organized, in part, based on popularity- the more times users click a certain link in reference to a specific search term, the higher up the rankings that site climbs for that search term. Sort of. There’s a much more complicated algorithm at work, an algorithm they are constantly tweaking to prevent spammers from manipulating the system to land in the top results.
Then again, maybe they just use pigeons. Who knows.
What we do know is that the internet is changing. And it’s not a change we all immediately recognize, as most of us have been here through its gradual evolution. It’s only when you take a step back and really look at it that you start to see the incredible shift we’ve made from the simple organization-and-consumption-of-information model the internet has been running on. Now, we are looking at the age of user-generated content (created and shared by users) and social media, a strange new beast with a new set of rules.
Just what is so important about the overwhelming flood of social networking happening on places like Facebook and Twitter? The strong socialization of the internet is turning traditional search and information gathering on its head. In the past, web socialization has been focused primarily in places like chat rooms (yes, those archaic institutions) and discussion boards. What we have now, however, is the ability for each user to carve out their own little microsite, an internet area and identity that is unique and centralized.
Within our individual internet realms, we have other denizens, our “friends,” those individuals in our social network that we know or respect. Just as we flock to real-life friends with similar interests, so too do we flock to internet-folk with similar interests. I don’t follow hockey players on Twitter- I follow geeks, scientists, sexual deviants, and people with wicked senses of humor. We create our online networks the same way we do our IRL ones. And web developers are looking at harnessing that information to further refine and personalize the internet experience.
Have you ever heard of Delver? Originally launched back in 2008, it began as a search engine that used your social network to generate search results. When you first got to the site, you’d type in your own name. Delver would then dig information out of your social networking sites, building its own network of associated ideas, institutions, and individuals around your personal internet community. Results were then generated with ratings based on sites related to, produced by, or tagged by members of a person’s social network. As Liad Agmon, then CEO of Delver (I have no idea if he’s still CEO, and I really don’t care enough to look it up), once said, “you are searching the Web through the prism of your social graph.”
Delver no longer operates in this capacity- it has now switched to a social commerce site that works in a similar fashion, targeted at finding products for consumers based on their social networks.
And you thought those targeted Facebook ads were creepy. Here’s an entire site dedicated to ripping through your public profiles and spoon-feeding you things you should buy.
But don’t think Delver is unique. Remember dear old Google? While their algorithms use the power of the many to deliver strong search results, they couple this with individual search tweaking based on your personal searches. Imagine if Google harnessed the power of your social networks in the same way Delver tried to. What we’re looking at is targeted wisdom of the crowd, taking the opinions of your circles (yep, I used that word on purpose- anyone who’s been following Google+ might chuckle a bit there, mostly because the latest foray of Google into the world of social networking might just accomplish this search and social network merger we’re talking about here) and generating content that will be more relevant to you and your interests.
After all, your friends should know you better than an algorithm… right? As Udi Manber, Google’s vice president of engineering in charge of search quality, said, “The art of ranking is one of taking lots of signals and putting them together. Signals from your friends are better, stronger signals.”
This is a form of crowdsourcing, galleons. By essentially outsourcing the task of finding content relevant to you to your friends, search engines could get back the most relevant and fresh results.
And now we can use the power of our group intelligence on the internet to help refine and aid AIs.
Here’s a very basic example. I’m sure most of you have, at one point or another, used an online translation site to attempt to decipher something in a foreign language. And how often has it spit back almost incoherent strings of words and symbols? Better yet, have you ever translated the same sentence back and forth a few times between English and a second language? The result is usually something with little or no relation to the original sentence.
Obviously, online translators are flawed. But how do we fix them? The problem with language and AIs lies in the fact that our communication is flooded with metaphors, puns, and clever wordplay. This is difficult to translate to algorithms for an AI to recognize (though not, necessarily, impossible- remember the TWSS program?). Which makes it hard to get online translators to generate high-quality translations.
And that’s where an AI could tap into the power of people to help it:
Take the counter-intuitive idea of doing translation without bilingual workers. The idea, known as MonoTrans, is the work of Philip Resnik and Ben Bederson at the University of Maryland in College Park. Imagine a Russian and a Spanish speaker, neither of whom speaks the other’s language. MonoTrans software translates the sentence back and forth between the two languages, inevitably imperfectly. But after each translation, the Russian or Spanish speaker edits the text to make it clearer, and it is translated back again. Three round trips are usually enough for the translations to reach high quality, say Resnik and Bederson. A pair of workers should eventually be able to translate 1000 words a day, they add.
Having a crowd doing this, back and forth, would inevitably yield very strong translations. Distilling truth from the masses, the AI would become stronger and better at its job. Like reverse cyborgs, we now have machines tapping into the power of humans to augment their systems.
Amazon long ago realized the potential of using groups of humans to supplement their existing programs. They launched Mechanical Turk in 2005, a site that gives anyone access to an enormous group of online workers. Anyone can work for Mechanical Turk- and thousands do. Meaning that the speed of response can be astounding. For example, the average response time for an image query (applications created to identify images usually use some form of program to determine what the image might be- if the program fails, the image can be sent to Turkers for a response, which serves both to please the customer and teach the program AI) is somewhere around 25 seconds.
Much of this is thanks to the proliferation of smart phones. With the ability to connect from anywhere, at any time, the number of humans available to help AI is staggeringly high at all times. And growing.
Remember The Matrix? Of course you do (because, frankly, if you are too young to get that reference, you need to get the hell off this blog). In the movie, super-intelligent machines had people trapped in pods (and, consequently, mentally existing in the digital world known as The Matrix), harvesting their bio-electrical energy and body heat to power themselves.
This is kind of like that, only less creepy.
Having a constant, expansive human “workforce” available does allow us to teach and train AIs to a startling degree of precision and, dare I say, humanity.
Here’s an entertaining AI training situation that might amuse you if you are ever bored late one night, dear galleons. Created by Rollo Carpenter, Cleverbot is an AI program learning to mimic human conversation. What makes it unique among the other chatbots littering the web is that Cleverbot uses algorithms to select previously entered phrases from its database of prior conversations when responding to you. Which can be either disturbingly accurate or hilariously off-topic.
However, each conversation Cleverbot has expands its database, giving it more and more to draw from. And the more it learns, the more human-like its conversations should get.
I don’t know. The two times I tried it, it kept trying to get me to talk about love, called me a vampire, and answered one of my pretentiously philosophical questions with “Tom Araya” (a member of the band Slayer)… Amusing, but hardly a believable human conversation partner.
Unless that partner was on drugs. Maybe that’s all Cleverbot can hope for- passing as a stoner.
Still, it’s entertaining for a short while. And hopefully, in the future, the power of the internet’s group intelligence will manage to train Cleverbot to the point where you will forget you are interacting with a computer (right now, there’s no way this sucker could pass the Turing test, in my opinion).
Though, frankly, if the group intelligence of the internet is the one teaching it, all it will probably do is insult you in misspelled, grammatically incorrect, bigoted nonsense. Just like any set of comments anywhere on the internet.
Maybe we shouldn’t be so quick to use crowd wisdom to teach our AIs. Because the internet collective is fucking idiotic.