Lorna Mills and Sally McKay
blingee by L.M.
Could an artwork actually function as an empirical experiment to study consciousness? If so, the viewer would be simultaneously situated as both the subject of the experiment and the scientific observer. This might sound fishy from a scientific perspective, seeing as the scientific method is structured specifically in order to bracket subjectivity off from observation. But scientists who study consciousness are faced with the epistemological problem that consciousness can only be directly observed by the subject who is conscious.
Even if we set aside the problem of AI, the science of consciousness is particularly challenged in the pursuit of empirical evidence. Because we cannot directly experience another's conscious state, even neuroscientists using MRI are forced to rely on self-report: subjects tell the observers how they feel, and the observers then correlate that information with the imaging data they collect. And of course this only works for humans. Monkeys provide the most detailed data on neurons and synapses because invasive experiments are done with them that aren't done with humans. But monkeys can't tell the scientists how they think or feel. So there is another order of correlation going on, between the less precise MRI data from humans and the more precise data from monkeys. Much of what neuroscientists understand about the fine structures of the brain is based on inter-species extrapolation. To add to the complexity of the problem, even within species, as Gerald Edelman points out, "no two brains are identical, even those of identical twins." (Gerald Edelman, Second Nature: Brain Science and Human Knowledge, 2006, p. 28)
Jade Rude and Bruno Billio in Empire of Dreams: Phenomenology of the Built Environment, Contemporary Artists from Toronto, at the MOCCA, 952 Queen Street W., Toronto. Until Aug. 15, 2010.
Curated by David Liss.
Russian Mountain and Yellow-Black Tower, 2010
Russian Mountain: Canadian lumber, gold acrylic mirror
Yellow-Black Tower: Styrofoam, lacquer
Yes, I like this show.
Jacques-Louis David's The Oath of the Horatii is in Toronto right now, and so is the sketch treatment of the painting by Ingres. They are part of the AGO show Drama and Desire: Artists and the Theatre, "featuring artwork inspired by the theatre, presented 'on stage' with live performers, full-scale sets and period lighting." It's a fantastic summer show. There's a bunch of hilarious Neo-Classical paintings, over-the-top Romantic paintings, and etchings of scenes from Shakespeare; there's work by Ingres, Delacroix, Degas, Aubrey Beardsley... tons and tons of really canonical drawings and paintings from the history books. The whole thing is staged in a fabulously cheesy theatrical setting with velvet drapery, proscenia and chandeliers, interactive props, and actors in costume wandering around spouting sonnets. When I'd only read the promo I didn't really get the concept. Well, actually I was expecting something sort of exactly like what it is, only bad. But the art is really worth looking at, with or without the theatrical theme, so there's no sense that the curators are trying to force an idea. Instead, the theatrical context is just a really fun environment for spending time with a bunch of great art and enjoying all the cheese factor that's also in the work.
blingee by L.M.
The limitations of the Computational Theory of Mind might seem kind of obvious: while the syntactical structure of language may be formally discrete, context, memory and shifting paradigms are inherently necessary to the construction of meaning. A computer may be programmed with an extensive vocabulary of terms, but the meaning of each unit would have to be computed against a background environment of infinitely variable conditions. This framing problem has proven to be a serious setback. As computational cognitive scientist Jerry Fodor points out...
The failure of AI is, in effect, the failure of the Classical Computational Theory of the Mind to perform well in practice. Failures of a theory to perform well in practice are much like failures to predict the right experimental outcomes (arguably, indeed, the latter is a special case of the former). For well-known Duhemian reasons, neither shows straight off that the theory in question is false. But neither, on the other hand, do they bode the theory in question an awful lot of good. If having such failures doesn't keep you awake at night, you're a lot more sanguine about your theories than I am about mine.
(Jerry Fodor, The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, 2001, p. 38)
Computational Theory is to neuroscience what String Theory is to theoretical physics: a compelling idea that has lost significant scientific credibility due to its failure, over time, to produce empirical results.
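To make the framing problem a little more concrete, here is a toy sketch of my own in Python (nothing like this appears in Fodor or in the post; the words and context cues are purely hypothetical). A symbol-lookup "mind" of the kind the Classical theory assumes can store a meaning for every discrete term, but deciding which meaning applies requires hand-coding contextual conditions, and the list of relevant conditions never stays finite:

# Toy illustration only: a context-free symbol lookup,
# plus a crude, hand-enumerated "context" patch.

LEXICON = {
    "bank": "financial institution",
    "crane": "lifting machine",
}

def literal_meaning(word):
    """Look up a stored, context-free meaning for a discrete symbol."""
    return LEXICON.get(word, "<unknown>")

def meaning_in_context(word, sentence):
    """Even a crude fix means enumerating context cues by hand,
    and the relevant cues are open-ended: the framing problem."""
    if word == "bank" and "river" in sentence:
        return "edge of a river"
    if word == "crane" and "nest" in sentence:
        return "long-legged wading bird"
    return literal_meaning(word)

if __name__ == "__main__":
    print(literal_meaning("bank"))
    # -> financial institution
    print(meaning_in_context("bank", "we fished from the river bank"))
    # -> edge of a river
    print(meaning_in_context("bank", "the bank raised its rates"))
    # -> financial institution (right here, wrong in countless other contexts)

The lookup itself is trivial; what refuses to stay finite is the background of conditions against which each symbol has to be interpreted, which is exactly where the computational picture starts to strain.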