ashnistrike: (lightning)
I'll be at Arisia in two weeks. I'm excited to go--it's my first time since college at one of my favorite cons, and my first time paneling there. I'll be on:


Judaism's Influence on SF/F – Adams – Sat 1:00 PM

Jewish theology and culture permeate science fiction across all media. What effect has Judaism had on the development of SF/F and fandom in general?

Michael A. Burstein (mod), Ruthanna Emrys, A Joseph Ross, Danny Miller, Ariela Housman


How to Be a Fan of Problematic Things – Alcott – Sun 4:00 PM

*Lord of the Rings*. *Stranger in a Strange Land*. *Scott Pilgrim vs. the World*. Many of us like things that are deeply problematic! Liking these works doesn’t (necessarily) make you a jerk. How can we like problematic things and not only be decent people, but good social justice activists? How does one's background matter? How does one address the problems? This panel will discuss how to own up to the problematic things in the media you like, particularly when you feel strongly about them.

Gwendolyn Grace (mod), Chris Brathwaite, Ruthanna Emrys, Mink Rose, Jared Walske


Grounding Your Audience in a Sensory World – Douglas – Sun 7:00 PM

The five senses are appallingly underrepresented in modern fiction. Without sensory information, it's difficult to grab your audience and drag them into your protagonist's body. How do you portray senses other than sight? Can you use them to portray emotion? Where can you scrounge up alternatives for the words see, hear, feel, taste and smell, or 'sixth sense' (psychic intuition)? Come learn how to describe your world in all of its glorious sensory detail.

Ken Schneyer, Keffy R.M. Kehrli, Ruthanna Emrys, Greer Gilman, Sonya Taaffe


Routing Around Cognitive Biases – Alcott – Mon 10:00 AM

Most of us have a friend who always plays the same lottery numbers, refuses to travel by airplane "because they're not safe," and thinks music was better when they were a kid. Your friend - indeed, most people - suffers from multiple cognitive biases. How do you make people aware of the flaws in their thinking so that they have the critical tools to avoid such biases in the future? What about the more difficult task of identifying your own biases?

Heather Urbanski (mod), Ruthanna Emrys, David G. Shaw, Stephen R Balzac, Andrea Hairston


Aside from that, I'll be wandering around the con taking advantage of their child care, trying not to spend all my money on dealer's row, and giving away "Lovecraftian Girl Cooties Posse" badge ribbons. And catching up with all my friends who very sensibly live in Boston--who's going to be there?
ashnistrike: (lightning)
Would anyone be willing to beta read a 7800-word science fiction story?  Possible first contact, poly families, xenolinguistics, and dysfunctional academic politics.
ashnistrike: (lightning)
Elizabeth Bear ([livejournal.com profile] matociquala) has an excellent guest post on SF Signal, about disability in science fiction--why it's worth including, how to do it right, and how to do it wrong.  I read it with interest, both because it's a topic that interests me in general and because it shows up in my own stories.  I like playing with how deficits get defined, and by whom, and how much trouble comes from an actual physical or mental issue versus from the way society handles it.

But, so, anyway.  The first comment--actually, the first 3 or 4 comments--is S.M. Stirling "pointing out" that within a hundred years we'll have a perfect understanding of biology, and therefore we won't have disabilities, so why should we write about them?

Obviously one could argue with every assumption in that very weird statement.  From a purely scientific standpoint, for a start... since we've never reached a perfect understanding of any other field of inquiry, we have no data points to infer how long it will take in biology.  Nor do we have any reason to suppose that perfect understanding equals perfect control.  We understand computer programs pretty well, after all, having created them--and they still don't always do what we want.

Also, I just went to a seminar on neuroscience data, and we were all really excited by a database that mapped the physical shape of 13 neurons in the hippocampus.  They had 2000 human neurons total.  Not all from the same human, you understand, or connected to each other.  I'm sure we'll get better at this over the next few years, but from a Bayesian standpoint I would bet a fair amount that perfection will take longer than a century.
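(For the curious, here's the back-of-envelope version of that bet. All the numbers below are my own rough assumptions, not anything from the seminar: a common ~86-billion-neuron estimate for the human brain, and a generously Moore's-law-ish doubling every two years.)

```python
# Back-of-envelope sketch (my own rough numbers, not the seminar's):
# how many doublings to get from ~2,000 mapped human neurons to a full
# catalog of ~86 billion, assuming generously fast exponential progress?
import math

mapped_now = 2_000            # neurons in the database described above
neurons_in_brain = 86e9       # common rough estimate for a human brain
doubling_time_years = 2       # Moore's-law-ish pace, which is optimistic

doublings = math.log2(neurons_in_brain / mapped_now)
years = doublings * doubling_time_years
print(f"~{doublings:.0f} doublings, or roughly {years:.0f} years")
# ~25 doublings, or roughly 50 years -- and that only catalogs shapes,
# never mind connectivity, dynamics, or the rest of biology.
```

Even granting those generous assumptions, a bare census of neuron shapes eats up half the century before anyone gets near "perfect understanding."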

But, so anyway.  Circumstances did not permit me to get in a neuroscience slapfight on Tuesday merely because someone was wrong on the internet, and by the time I got back someone else had done it.  Instead, I decided to take Stirling's scientific postulates for granted--we will have a perfect understanding of biology, and perfect understanding allows perfect control--and asked what disability would look like under those circumstances.
Read more... )

ETA: S.M. Stirling, not Steve Brust. Apologies to Brust, whose name was in my head because I just got excited about the publication date for Hawk.
ashnistrike: (Default)
I love my wife.

I got home today and saw a copy of The Watchtower on the coffee table.

Me: Oh, did we get Jehovah's Witnesses today?
S: Yes, apparently some live nearby.
Me: So, what happened?
S: They asked whether I'd noticed all the bad things happening in the world, and whether I agreed that things seemed to be getting worse all the time, and didn't I think that was a sign of the coming apocalypse?  I explained to them about the availability heuristic* and about how rates of violence are actually getting lower.
Me: I love you--what did they say?
S: That it made sense. And they stayed and rested a while before they went back out in the rain.

And now I feel like I ought to put these things together in a convenient pamphlet for the benefit of people not married to psychologists.


*I can't find a good link for this aspect of the heuristic, but in general it's easier to think of bad things that happened recently, because it's generally easier to think of things that happened recently.  And it's definitely easier to think of bad things that have happened during your lifetime.  This leads to every generation imagining a recently lost golden age when this stuff was unheard of.
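(If it helps, here's a toy simulation of that recency effect. Every number in it is invented; it's only meant to show the shape of the illusion.)

```python
import random

# Toy model of the recency side of the availability heuristic (numbers
# are invented).  Bad events happen at a constant rate every year, but
# older ones are harder to bring to mind, so the recalled sample looks
# like "things keep getting worse."

random.seed(1)
years = range(1955, 2015)
events = [(year, i) for year in years for i in range(5)]   # 5 bad things/year

def recall_weight(year, now=2014, half_life=10):
    """Ease of recall falls off exponentially with a memory's age."""
    return 0.5 ** ((now - year) / half_life)

recalled = random.choices(events,
                          weights=[recall_weight(y) for y, _ in events],
                          k=20)
recent = sum(1 for y, _ in recalled if y >= 2004)
print(f"{recent} of 20 recalled bad things come from the last decade,")
print("even though the actual rate never changed.")
```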
ashnistrike: (Default)
In and around trips to S's family in Michigan, my family on Cape Cod, and [livejournal.com profile] papersky and [livejournal.com profile] rysmiel for New Year's in Montreal.


All the Dan Ariely you can shake a stick at )


I'm now in the middle of Kahneman's recent book on the same topic.  In addition to being deeper and more intelligently written, Kahneman isn't trying to boost his own ego.  It's refreshing, and I'd recommend it over the Ariely easily if you're interested in the topic.


Somewhere Beneath Those Waves, by Sarah Monette )


Twilight, by Stephenie Meyer.  Reviewed elsewhere.


Brokedown Palace, by Steve Brust )



Reality is Broken, by Jane McGonigal )


Other Media Consumed:


Hadestown )


Plus the usual assortment of podcasts.

Total Books: 6
Recent Publication: 3/6
Rereads: 0/6
Recommendations: The Arielys were for work, but recommended only in the sense that we decided to do a book club on them, pre facto, and technically that was my idea.  Going to be an interesting discussion tomorrow; I'm not the only one who found them annoying. [livejournal.com profile] papersky reviewed Brokedown Palace some time ago. [livejournal.com profile] robling_t is, um, responsible for Twilight. Hadestown was recommended on the SF Squeecast.
New Music: 1 album
New Media Produced: Some Aphra Marsh, some Highways and Labyrinths
ashnistrike: (Default)
"Give us a dozen healthy memories, well-formed, and... we'll guarantee to take any one at random and train it to become any type of memory we might select--hammer, screwdriver, wrench, stop sign, yield sign, Indian chief--regardless of its origin or the brain that holds it." - Elizabeth Loftus and Hunter Hoffman, 1989
ashnistrike: (Default)
Natural languages are born when communities of children who don't share a language come together.  This can be because they speak different languages, or because they haven't got a full language to begin with.  Neither isolated children nor communities of adults seem to be capable of doing this.  Newborn languages are almost exclusively learned by children; you get the first adult speakers when those children grow up.

Constructed languages are born when one adult, or a small group of adults, deliberately creates a vocabulary and a grammar.  They teach this language first to other adults.  If a linguistic community forms, it is likely to have more adults than children; the language may never be taken up by a viable population of children at all.

According to the innateness hypothesis, pre-adolescent children have an instinct that eases the learning of language.  This instinct includes a predisposition to look for certain language-like patterns in the environment, and also, given a large enough group, to create those patterns from non-grammatical language-like input.  The way new natural languages are born, and the fact that the process requires kids, is considered strong evidence for innateness.  The innateness hypothesis would also predict, then, that languages deliberately created by adults and learned largely by adults should have different properties than languages created spontaneously by children and learned largely by children.  In some fashion, you should be able to tell by looking at the structure of a language whether it's natural or constructed.  Yes?  Has someone already done this research?  If not, could they please?
ashnistrike: (Default)
H.M. is one of the most famous patients in the history of neuropsychology. Fifty years ago, in an ill-planned attempt to cure his epilepsy, surgeons cut out most of his hippocampus. Following surgery he had (almost) complete anterograde amnesia, language difficulties, and some very strange retrograde memory deficits. Any modern theory of the neurology of memory has to explain H.M.'s peculiar constellation of deficits. He died two years ago, and analysis of his behavioral data is expected to continue for many years to come.

On Tuesday, neuroscientists are going to begin the process of dissecting H.M.'s brain and creating a digital atlas of it. This will be incredibly useful, among other things settling the question of what lesions he had aside from the surgical one. (Decades' worth of epilepsy, and epilepsy drugs, tend to make holes.) Everyone is perhaps a little more excited about cutting up a dead guy's brain than is entirely proper and, possibly because of this, they will be putting out live streaming video of the procedure. I will probably tune in myself. I've been running an independent study this semester, during which the student and I spent an inordinate amount of time banging our heads against the latest H.M. studies. We both feel much less confident in our understanding of the hippocampus than we did at the beginning of the semester. I am very much hoping that the dissection clears up some of that confusion.

The website's cheerful minute-by-minute sidebar countdown is still so wrong.
ashnistrike: (Default)
Sue Gardner, in her contribution to a series in which psychologists talk about one thing they still don't understand about themselves: "I have a dark place inside which at various stages of my life has been occupied by ghosts, daleks and negative emotions."

This should absolutely enter the technical vocabulary of clinical psychologists everywhere. But what does it mean to have internal daleks? Is it part of you that's always angry, that wants to destroy everything and everyone imperfect? Is it the part that wants to be surrounded by a hard shell, safe from vulnerability? The part that likes making loud threats in an electronically modulated voice?
ashnistrike: (Default)
Two things showed up on my friends page today that made me think of this. The first is a brief rant from Tom at Mind Hacks on the subject of "neuroessentialism" (or "neuromysticism"), which he defines as the tendency to invoke neurological terms to support psychological claims, even when they don't actually add anything. He gives an example from Lakoff's Don't Think of An Elephant:

"One of the fundamental findings of cognitive science is that people think in terms of frames and metaphors - conceptual structures like those we have been describing. The frames are in the synapses of our brains, physically present in the form of neural circuitry. When the facts don't fit the frames, the frames are kept and the facts ignored." (When Lakoff says "frames," he means the things that I call "schemata." These are your generalized and often stereotyped representations of how the world works.)


And then Tom says:

"Indeed, for many psychological claims neuroscience can add little or nothing to our assessment of their truth. Taking for example this claim that frame-incompatible facts get rejected, knowing that frames are embedded in brain tells us nothing, but even knowing how frames are embedded in the brain may not be as useful as it first appears. Whatever neuroscientific facts we discovered about frames, the final judgement of the truth of this claim would rely on answers to questions such as is it true that frame-incompatible facts tend to get rejected? In what range of circumstances is this true and how can it be affected? The last word would be behavioural evidence, regardless of what information was provided by neuroscience."

My first reaction is wild cheering, since I'm pretty tired of hearing pronouncements that behavioral psychology is on its way to extinction, soon to be replaced by the ever-so-much-more-scientific methodology of neuroscience. But then I have to stop and consider: is this true? No question, in this case you can't ignore the need for behavioral research (which, incidentally, exists and supports at least a weak version of Lakoff's claim). No extinction for me, huzzah! But what could we add, if we knew something about the neuropsychology of the situation?

Well, the particular location in the brain of those synapses would be useful to know. Are frames--oh, smeg, let me just call them schemata--associated with sensory and attention areas, suggesting that you never even process the conflicting information in the first place? Are they encoded in your hippocampus, meaning that you can probably process the conflicting information short-term, but it will never make it into your long-term memory? Are they in your long-term storage areas, changing your memories over time to become more like your expectations? The behavioral evidence actually supports all of these claims to some degree; neuropsychological evidence might help tease out more exact processes. My suspicion is that schemata are so basic to cognitive processing that there are dozens of different ways they get represented. This would make Lakoff's description utterly useless--but it would be possible to make non-neuromystical claims, using neurological language, that would be both useful and testable.

The other thing was the latest Alzheimer's research. It seems they may finally be homing in on the proteins that actually cause the memory loss. Behavioral evidence can help us diagnose Alzheimer's (it's possible that the earliest indications may come from changes in writing style). Behavioral evidence can help us prevent Alzheimer's (the richer you keep your mental environment throughout your life, the more protection you gain--though nothing absolute). But the cure, when it comes, will be neurological.
ashnistrike: (Default)
Right now, I'm supposed to be doing the write-up of the aging work. But I don't want to. I mean, I really don't want to. I always get bogged down in the sloggy part of the literature review. It'll pass; in the meantime, here's sort of a random update of what I'm doing. Or what I'm supposed to be doing.

The undergrads are really awesome this semester. Yesterday was the wrap-up of evolutionary psychology, and we got into a discussion of whether natural selection has any continuing effect on human biology. My favorite student (yes, I've got one) started talking about memetic evolution as a replacement--only he hadn't ever bumped into the concept of memes before; he'd come up with it on his own. Now mind, I'm pretty clear in my belief that you can't get away from natural selection, even if your species spends a lot of time being a selecting influence on everything else. But the conversations get a lot more interesting when they disagree with me. I never enjoyed teaching before I came here. Then again, at Stony Brook I tried to start a discussion on animal testing, and couldn't get anyone in a class of 50 to start an argument.

I had a Brown Bag talk yesterday. At most psych departments, this is a longstanding tradition in which every professor, and sometimes the grad students, takes a weekly turn at getting up and telling everyone what they've been working on. Apparently they've never been able to get one off the ground here, so I'm giving it a try. Our first one went well, and we had a full conference room. Mine only had about 7 people; apparently we had some trouble getting the word out. Still, I got a lot of good feedback, and a fun informal discussion. I also got to use, as my central example of a non-credible source, a Weekly World News headline announcing that Dick Cheney Is A Robot. My old advisor somehow got it through my head that academic talks are a form of performance art. I persist in believing that this is a Good Thing. Obviously my colleagues agree, since I got here in the first place with a job talk that opened with the dangers of dihydrogen monoxide.

The rest of this month continues to be meetings, with a sprinkling of presentations, followed by meetings. I've got to convince the human subjects board that I'm not about to lock my participants up in an oubliette. I've got to explain to the interdisciplinary nanotech group why, even though it's not hard to get people's opinions on something that doesn't exist, it's hard to get opinions that mesh with the ones people will hold when the thing does exist. And there's a full faculty meeting that just promises to be hysterical fun, or not. Though, in an effort to increase attendance, we have been promised that the university president will not give a speech. Outside of work, this weekend Nameseeker and I have that SF society party that we promised to host, because we were crazy. I'm looking forward to it, because, well, I'm crazy.

Oh, and about fifty years ago, a guy named Clark Hull theorized that long strings of goal-oriented behaviors were learned by what he called fractional antedating goal responses (Hull kind of got off on creating terminology). What this means is that, if you are learning that a certain sequence of behavior results in having dinner ready, first you associate putting the food on the plate with eating, then you associate having the food in the oven with putting it on the plate, then you associate prepping the roast with having it in the oven, and so on all the way back to the grocery store. The behavior always occurs forwards (unless your name is Billy Pilgrim), but the mental representation starts with the reward and works backwards. Anyway, he wouldn't have used my example--he would have talked about a rat associating the final corridor of a maze with the cheese, then associating the previous turn with that corridor, and so on back to the start of the maze. And today, the New York Times reports neuropsychological findings showing that he was right. Since Hull never got a chance to directly test his theory, this is pretty cool. For definitions of cool that involve being a learning theory geek, at least.
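(If you like seeing the machinery, here's a toy version of that backwards-creeping association--my own illustration, dressed up in modern reinforcement-learning clothes, not Hull's formalism and not the study the Times is reporting on.)

```python
# Toy sketch of fractional antedating goal responses, phrased as simple
# temporal-difference learning.  My illustration only.  Behavior runs
# forwards, but the association creeps backwards from the reward, one
# link per round of experience.

STEPS = ["grocery store", "prep the roast", "roast in the oven",
         "food on the plate", "eat dinner"]
value = {step: 0.0 for step in STEPS}
ALPHA = 0.5        # learning rate
REWARD = 1.0       # dinner is ready

for trial in range(1, 7):
    for i, step in enumerate(STEPS):
        # each step's association is pulled toward whatever follows it
        target = REWARD if i == len(STEPS) - 1 else value[STEPS[i + 1]]
        value[step] += ALPHA * (target - value[step])
    print(f"trial {trial}:", {s: round(v, 2) for s, v in value.items()})
# Trial 1: only "eat dinner" is worth anything.  A few trials later, the
# association has antedated its way all the way back to the grocery store.
```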

Hobbitses?

May. 19th, 2005 05:34 pm
ashnistrike: (Default)
So now it turns out that there's a community of pygmy humans living on Flores very close to the site where the hobbit skeleton was found. Is this evidence against Homo floresiensis being a different species? This article says that they exist--but it doesn't give any indication that they have the same cranial oddities as the archeological specimens (i.e. the small but fully-formed brain). They're a bit taller, by about 50 cm on average, but humans have gotten taller over the last few centuries too.

Although it's not in the article, I have a hypothesis that I like. I've seen some suggestions in recent years that modern Homo sapiens actually have some Neanderthal DNA--that we interbred with them when we went north rather than just killing them. So maybe these relatively tall pygmies are the modern offspring of H. sapiens and H. floresiensis ancestors? Someone needs to take DNA samples in Rampasasa.
ashnistrike: (Default)
Since I spent the previous post correcting the misconceptions of others, I thought I'd share one that I just got fixed on Friday.

So there's this phenomenon known as infantile amnesia. A baby, or a toddler, obviously has long-term memory. She recognizes her parents, or even people she hasn't seen in a while, remembers what she did the last time she was with them, can tell you what she did yesterday (once she can talk), and so on. However, once you get a little older, some sort of curtain seems to fall. Your "first memory" is somewhere between 3 and 5, sometimes a little later. You can no longer remember the things about being 2 that you could remember at 2-and-a-half.

The (speculative) explanation that I'd always heard was that around this time, your language skills get good enough that you start thinking verbally, and encoding your memories linguistically--and this mode of thought is so different from the previous one that you can't access the old memories. Creepy, huh? But it fits in with what we know about state-dependent memory. It's harder for you to access happy memories when you're sad, even though that's a much less dramatic difference.

So I was at a conference on Friday, having dinner with some fellow memory folks, and talking to someone who works with kids about cross-cultural differences. She mentioned, casually, that infantile amnesia ends an average of 2 years later in China. First memories tend to show up in the 5- to 7-year-old range. Language is learned within the same time frame in both America and China, so this completely destroys the idea that the amnesia is related to lack of linguistic skill. My first reaction, of course, was WTF?

A tangent here, so that the explanation makes sense: America is more-or-less an individualist culture. That means we see the person as the basic unit, one that relates to others but acts and thinks on its own. We value standing out, "finding yourself," and personal achievement. China is what's referred to as a collectivist culture. Collectivist cultures see the group as the basic unit, and action and decision-making as things that happen through cooperation and interaction. They value harmony and the prioritization of group goals over one's own.

So an American child comes home from a trip out with grandma, and gets asked: "Where did you go? Oh, you went to the zoo? What did you see? Did you feed the elephant?" The Chinese child is much less likely to be asked this type of question. The issue isn't linguistic skill, but practice creating personal narratives for others. In other words, the reason the individualist culture has the earlier memory is because of the interactions the child has with others!

This is the coolest thing I've heard in ages. I am such a geek.
ashnistrike: (Default)
In response to a request on [livejournal.com profile] ozarque's journal, an attempt to dispel a couple of popular myths about brain structure.

Let's start with hemispheric organization. Pop psychologists can spend hours discoursing on right-brain/left-brain dichotomies. Ostensibly, the left hemisphere is responsible for analytical, intellectual, rational thought. The left side of your brain is staid and conservative, unemotional, and quite possibly a tool of the patriarchy. The right hemisphere, by contrast, is said to be creative, holistic, intuitive and emotional. It is probably planning the overthrow of all conventional ideas about politics and art at this very moment. Furthermore, everyone's thought processes reflect the dominance of one hemisphere over the other. In order to increase your creativity, you must practice your right-brain skills (it's assumed that most people in patriarchal American culture have ended up left-brained, or at least that there aren't any artists with a desperate yen to improve their mathematical abilities). Obviously there's an assumption here that hemispheric dominance is learned. You can be whatever you want, with enough effort. Or you can just spend all day taking internet tests that tell you if you're right-brained or left-brained.

My lovely Intro to Learning textbook (An Introduction to Theories of Learning, by B.R. Hergenhahn & Matthew H. Olson) calls this "dichotomania." It's based on a kernel of truth, but that kernel has gotten pretty buried.

Your cerebral cortex is, indeed, divided into two hemispheres. They do have functional asymmetries, the most obvious of which is that each is responsible for the opposite side of the body. The right hemisphere takes sensory input from, and sends motor output to, the left side of the body, and vice versa. This is so weird that it's been suggested that in the distant evolutionary past, vertebrate heads somehow mutated and flipped around 180 degrees. If you are strongly right- or left-handed, the motor cortex on the opposite side of your brain will be slightly more developed in the section responsible for your arm and hand.

The major difference between the right and left hemispheres is that, in most people, the left hemisphere performs most language functions. All speech production and much of comprehension originate in this hemisphere. This is where you store vocabulary, put together sentences, piece out meanings. The equivalent areas of the right hemisphere are responsible for what Ozarque refers to as non-verbal communication--particularly the "tune" of your speech. Damage this area, and you'll speak with completely flattened affect, or be unable to tell the difference between "You ate the last donut" and "You ate the last donut?!!!" Musical tunes also seem to be more a right-hemisphere thing.

For some people (about 15% of left-handers), language is in the right hemisphere instead. Another 15% of left-handers have language evenly divided across hemispheres. This doesn't appear to have any effect on your thought processes--it's only important if you go in for brain surgery.

Overall, the left hemisphere does seem to be more detail-oriented, and obviously more verbal. Processing in the right hemisphere does seem to be more holistically-oriented. However, there is no basis for the idea that people have any sort of hemispheric dominance. None. Most thought processes involve both hemispheres to some extent. Under normal circumstances, they communicate constantly, so that everything gets both broken down into details as well as perceived as a whole. You've got to be able to do both of those things in order to, for example, understand that the thing in front of you right now is a computer.

In fact, there's some evidence that creativity is a function, not of right-brain activity (where would that leave writers?), but of cooperative activity between the two hemispheres. The primary connection between the two is a wad of axons called the corpus callosum; there's been some demonstration that people who score high on tests of creativity have slightly thicker CCs. (Standardized tests of creativity: a rant for another time).

Now, certainly, some people tend to think more intuitively, some more analytically. It's the difference between someone who enjoys painting and someone who really loves computer programming or calculus. There are also people who are pretty good (or pretty bad) at both. What you like to do, or what you're good at, is a more complicated issue than which hemisphere is lording it over the other one.


The 10% myth explanation is, happily, shorter. You use all of your brain. You don't use the whole thing all the time (that would be severe and probably fatal epilepsy), but everything that's in there right now has a function, and gets called on on a semi-regular basis. We know this because any connections that don't get used become weaker over time and eventually go away (we're talking about the learning and reasoning areas here--what happens to old memories is a more complicated question). This is especially notable in infants. A baby starts with an enormous number of connections, and much of their initial learning is a matter of pruning connections that don't actually reflect the universe they find themselves in. It's like carving a statue from a block of marble, chipping away those bits of stone that aren't part of the image of a horse (or whatever you feel like carving). The only difference is that you also regularly grow new connections, as you learn things you didn't know before. The phrase "use it or lose it" is an apt one here. Skills that are used become stronger; skills that are neglected fade.
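(The cartoon version of use-it-or-lose-it, with completely made-up numbers, looks something like this:)

```python
# Cartoon model of "use it or lose it" (rates are made up; real synaptic
# plasticity is vastly messier).  Practiced connections strengthen;
# neglected ones decay and are eventually pruned away.

weights = {f"connection_{i}": 1.0 for i in range(10)}
practiced = {"connection_0", "connection_1", "connection_2"}

STRENGTHEN = 1.1     # gain per round of use
DECAY = 0.7          # loss per round of neglect
PRUNE_BELOW = 0.1    # below this, the connection is gone

for year in range(10):
    for name in list(weights):
        weights[name] *= STRENGTHEN if name in practiced else DECAY
        if weights[name] < PRUNE_BELOW:
            del weights[name]        # chipped away, like the marble

print(sorted(weights))               # only the practiced connections survive
```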

We know, roughly, what most of your brain does. That is, we can point to a section that's responsible for movement, and another that does something with language, and another that lights up when you do crossword puzzles and we're still arguing over what that means. Obviously the map is more detailed in some places than in others. However, if someone makes an argument for psychic abilities that starts with "scientists only know what 10% of your brain does--you must be doing something with the rest of it," then you probably want to look around for a better class of parapsychologist.
ashnistrike: (Default)
My good learning textbook has fun words in it.

Dichotomania: the desire to divide all methods and styles of cognition simplistically into "left-brain" and "right-brain" thought. I've been haranguing my kids about this for years, but I've never had a word for it.

Spandrels: an evolutionary term--side-effects of a mutation with adaptive benefit that aren't the original adaptive "purpose" but are cool anyway. So the immediate adaptive benefit of increased brain size in humans might have been better problem-solving skills, better memory for food and predator location, language, or any number of other possibilities depending on your pet theory (it's hard to test these things). Spandrels would include the ability to compose symphonies, long philosophical rants at 4 AM, and democratic constitutions. I love this word. It sounds like extra sparkly bits that got added on to something already beautifully functional, just to make it more decorative. It makes me feel like I'm walking around dressed up all fancy.

The cynics may now try to figure out an equivalent word for traffic jams, water pollution, and Welsh-language television (no offense intended to Ibliss--just a Good Omens reference).

Spandrels. Spandrels, spandrels, spandrels... it's better than "plethora!"
ashnistrike: (Default)
This started as a comment to [livejournal.com profile] cynthiarose's latest entry, but it got long. The original article is here, and the original report is here. The gist is that human cortex size suggests that we are capable of representing about 150 individuals as real people, and empathizing with them and trying to treat them well. Everyone else is outside of our little bubble. Of those 150, only about 12 are likely to be particularly intimate. Cyn went into the deeper philosophical and ethical implications of this, and wondered what it meant for religious beliefs like "If I hurt another person, I offend god," and "What I put out into the universe will come back to bite me in the ass." That last one is the Law of Sympathy or the Threefold Law, very roughly summarized.

Behold, as I answer a deep philosophical post with cognitive psych geekery.

Most of the numerical limits on our thinking have work-arounds. For example, we're only capable of keeping 7 (plus or minus 2) items of information in short-term memory. However, that can be 7 random letters, 7 words, 7 sentences, 7 theatre monologues...with practice, you can get some very large items in under that limit. It's called chunking, and depends on how large you can make items and still keep them meaningful to you. It's no stretch to think the same thing could be done with what I'll call Empathy Group Size.
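(Before I get back to people, here's the classic toy demonstration of chunking--my own example; only the 7-plus-or-minus-2 limit comes from the literature.)

```python
# Toy demonstration of chunking (my example; only the 7-plus-or-minus-2
# capacity limit comes from the literature).  The same ten digits blow
# past the budget as raw digits, but fit easily once grouped into
# meaningful chunks.

CAPACITY = 7   # plus or minus 2

phone_number = "3125550187"
as_digits = list(phone_number)              # ten separate items
as_chunks = ["312", "555", "0187"]          # area code, exchange, line

for label, items in (("raw digits", as_digits), ("chunks", as_chunks)):
    verdict = "fits" if len(items) <= CAPACITY else "doesn't fit"
    print(f"{label}: {len(items)} items -> {verdict} in short-term memory")
```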

So, the Law of Sympathy is one way of chunking your treatment of large numbers of people, and quite a rational way of dealing ethically with a large population. So is "I want to work for the betterment of poor people." So is "all politicians are idiots" (also a way of chunking, that is--not also rational). How we treat the bulk of humanity depends on whether we chunk them into units that are well-treated or poorly treated. So, "if you hurt another person, you hurt your one true god" chunks about 6 billion units into one unit--a marvel of cognitive efficiency.

Now, separately, the geek questions. I'm wondering about maximum pantheon sizes. Twelve, for intimate relationships? Or 150, for meaningful individuals? Do we have to trade off gods and people? What about pets--if you take in a cat, is that one less person you can care about, or do they go in a separate category? And is Empathy Group Size correlated with short-term memory size? (Since short-term memory size is normally correlated with IQ, that would have some interesting, if doubtful, implications.) Do we have an open slot or two into which we stick people currently in the news?

Right now, I've got a slot for "my class that meets this evening, which I still need to finish prep for." It's a motivation lecture, of course, and my third of the week.
ashnistrike: (Default)
Since Nineweaving asked about my work:

You've got to begin somewhere. I started life as a misanthrope, without much sympathy or appreciation for my species—indeed, without much certainty that I was really one of them. Over the years, as I've practiced both my art and my science, I’ve come to love my species in the way that one can only love someone with a full understanding of their flaws. Both writing and psychology, in their own way, are fundamentally about feeling that way about humanity. Like the best characters, our weaknesses are inseparable from our strengths.

Our memories inform every other aspect of our cognition. What you do, every moment, is informed by what you did before, or what was done to you. Rather, it's informed by your recollection of this, which is not the same thing. When you encounter the world, only some of what you sense and think is recorded. There are fuzzy spots, missing pieces, and times when you were simply looking the other way. However, we are, as I tell my freshmen, meaning-making machines. Another way of saying this is that we are story-making machines. We want our memories to form a cohesive, interesting narrative—one that not only acts as a useful guide to what may happen next, but that has us as the protagonist. To this end, we fill in the blanks, reconstructing what seems most plausible or desirable in them. We rewrite other parts, based on what experience or wishful thinking tells us ought to be the case, or on the beliefs we expect to find evidence for, or on what our friends remember about the same incident. This last part allows us to construct cohesive narratives at a societal level—vital for holding the tribe together, but dangerous when the narrative says (as it usually does) that our tribe is the best, bravest, and most noble of all the tribes in the world.

When I first describe this process to people, many of them think that this sounds like a problem—that eidetic memory would be a much better idea. Of course, if we created a perfect record of our sensory impressions, uncolored by inference or rumor or emotion, there would be some definite advantages. The logarithmic table that you were forced to memorize in high school would be available to you eternally, and even after a bad break-up, it would be just as easy to recall the good times as the bad times. The problem is that the processes and structures that cause memory to work this way are the same ones that allow us to draw inferences in other circumstances. This game of "fill-in-the-blanks" is at work when you recognize that the voice on the phone is your lover, and can picture her exact expression as she talks to you. It's at work when a scientist sees a falling apple and thinks of an equation to explain it. It's at work when you look for your parked car, and recognize it by the rear bumper poking out from behind the SUV. It's at work when a writer makes a new story out of the composted bits of childhood and half-remembered dreams and the drunk who was ranting about Kennedy on the streetcorner last week. Every imaginative process is dependent on the imperfection of our memories, and the ways we compensate for it.

More specifically: The parts of this that I focus on are the bits where I talk about desire and wishful thinking and prior belief. I look at how we rewrite the world to make it more like we want it to be. My early work shows how people sometimes remember that they learned information they want to be true from sources that are usually right (even if they didn't), and information they don’t want to be true from sources that are usually wrong (even if they didn't). In moderation, this sort of error can be a good thing—optimism is often self-fulfilling. However, sometimes it can lead to lousy decision-making, because you’re acting on a false model of the world.

My pet project at the moment is source credibility judgments—i.e., how do you decide that the New York Times is more trustworthy than the National Enquirer (or, depending on your preferences, that Indymedia is more trustworthy than the New York Times)? Why can't your grandmother (by which I mean, my grandmother) tell that Publisher’s Clearinghouse sweepstakes notes are *not* trustworthy? Eventually, I want to find ways to train people to be better at this, so that a cancer patient researching the latest findings on the web, or my grandmother, can tell good information from scams.

Ideally, I'd say the ultimate goal of psychology is the cure for human stupidity. I'm not at all certain, given what I said above, that it's likely or even desirable that we'll actually reach it, but I believe in the value of impossible goals. Reaching for this one makes it that much more likely that we'll survive our weaknesses long enough to fully enjoy our strengths.
