O.K., I think the 'new post' feature is working again on my page. Let me know if it's not. Thanks.
Booknotes is talking about an idea to enable "block blogging" of important topics (where many people link to the same story from many individual personal sites.) I don't quite get it. I understand what people are after is some idea that certain news topics deserve to be widely covered (maybe things that specifically don't get covered in the mainstream) and that all these personal sites can help out by jumping on the bandwagon and all linking to the same stories. But isn't that exactly what happens already? Without any news feed apparatus? I remember when people used to think all the link overlap was a bad thing.
Paul Ford has been traveling in Israel and thinking about the Holocaust.
My friend J. wrote yesterday about us getting together, and I mailed him back saying I was around Tuesday, Wednesday, and Friday. The fact that it was already 6:00 pm on Wednesday probably made that email a little confusing. When we finally talked I could tell he had that "are you sure you're ok?" sound in his voice. Yes, I'm ok. Just been concentrating a little too much maybe. So what if I thought it was Monday yesterday.
"What we need now, McCloud argues, is new ideas for presenting comics digitally, which will inevitably emerge from many directions; but we'll also need new technologies to make digital delivery easier."
I've been looking more closely at XML-RPC. This is a protocol for communicating with "web services," which themselves don't really exist yet, but soon. Real soon. This is the sort of thing that Microsoft's .NET is all about. The idea is that computer programs, things like word processors, or spreadsheets, or even computer games can all be thought of as providing "services" to the client (that's you.) Up until now, software programs usually ran on your computer. In the future (or so the story goes) software will more and more run on large servers located "out there" on the web. Your computer will run the software by sending requests to these different servers and listening for the response. This back and forth with remote servers running applications could be done in countless ways, but XML-RPC seems to be making some headway. As is SOAP, but that's a different story. Anyway, the RPC part stands for Remote Procedure Call, and the XML part just means that these calls to remotely running procedures (or programs) are encoded in XML (instead of HTML or ubbi dubbi or whatever other standard you could think of.)
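If you're curious what one of these XML-encoded calls actually looks like, Python's standard library can show you without touching the network. This is just a sketch; the method name `example.echo` is made up for illustration, not any real service.

```python
import xmlrpc.client

# Encode a remote procedure call as XML-RPC. dumps() builds the XML
# body that a client would POST to the server; "example.echo" is a
# hypothetical method name, not a real service.
request = xmlrpc.client.dumps(("Hello, world",), methodname="example.echo")
print(request)
```

The output is a small `<methodCall>` document naming the procedure and wrapping each parameter in typed `<value>` elements. That's the whole trick: ordinary HTTP carrying a tiny XML envelope, which is why it's so easy for any language to speak.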
Anyway, like I said, I was digging through the docs on XML-RPC and I came across that little list of services, none of which seem very useful outside of the UserLand community, except this one: Speller. This is a great example of an XML-RPC service. You send Speller a text string, and it sends you back a list of words it thinks are misspelled (and some guesses at correct spellings if it's able.) Nice. And it won't really be you doing this communicating, it will be some little program on your computer. You won't even notice it. If that doesn't get you excited, rest assured that people who write software are very excited. In this new world I'll be able, for example, to add spell checking to any application I write just by making a simple XML-RPC call to the speller server. Once enough of these base services are deployed, we'll be able to build new web apps just by stringing together these basic blocks. This will be tremendously easy. (Of course there's a complicated cost structure issue here, but I'll leave that for another day.)
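To make the round trip concrete, here's a toy stand-in for a Speller-style service, again using nothing but Python's standard library. The real Speller lives on a remote host with a real dictionary; this sketch fakes it with a tiny local word list, and the method name `speller.checkText` is my invention, not the actual Speller API.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# A toy dictionary standing in for the real thing.
KNOWN_WORDS = {"the", "quick", "brown", "fox"}

def check_text(text):
    """Return the words that aren't in the dictionary."""
    return [w for w in text.lower().split() if w not in KNOWN_WORDS]

# The "remote" server, run locally just for this demo.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(check_text, "speller.checkText")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: one call, and the misspellings come back as a list.
port = server.server_address[1]
proxy = xmlrpc.client.ServerProxy("http://localhost:%d/" % port)
print(proxy.speller.checkText("the quikc brown fox"))  # -> ['quikc']
server.shutdown()
```

Notice the client code is one line of actual work. That's the appeal: the application never sees the XML, the HTTP, or the dictionary, just a function call that happens to run somewhere else.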
Would it be better to have these services available locally on your own machine? Well, yes and no. It would be faster to run them locally, but then - in the case of Speller, say - you'd have to install a dictionary program, and worse, you'd have to update it when updates came out, and patch it when security holes are found. The beauty of these distributed services (be they XML-RPC based or not) is that somebody else (the service host) deals with all that stuff. You sacrifice a little control, but you gain a great deal of ease of use. If you have a lot of bandwidth, this is probably a good trade-off.
After looking over Speller, I finally realized where I knew the author's (host's?) name - Lance Arthur. He's glassdog, a weblog I used to read, but for some reason haven't been lately. Nice to tune back in. He's written a rather nice piece exhorting us all to be great.
"Good God, don't you realize what you have? Can't you see it sitting right there in front of you? It's everything! It's amazing and colossal and yours for the taking! There are no rules. There are no laws. There's no one here you have to listen to or answer to or pay attention to. This is the time you've been waiting for. Now. It's here."Speller is certainly a great gift to the community, so he's already doing his part. Now if everybody else would just make something cool and give it away...
"Table 2 indicates that the 60 known, largest deep Web sites contain data of about 750 terabytes (HTML included basis), or roughly 40 times the size of the known surface Web. These sites appear in a broad array of domains from science to law to images and commerce. We estimate the total number of records or documents within this group to be about 85 billion."
(via my new fav - thanks dave - wood s lot)
apache == mp3 streaming server
Cool.
Astronomy picture of the day: M51: The Whirlpool Galaxy.
Arstechnica has a nice look at the current state of digital photography.