There is no spoon notes that David Watson
[has come] to the rescue for Radio users who would like to add "autolinking" or automatic "backlinking" to their blogs. He's created a little web service called "getReferers" that automatically generates a list of links to the sites that are linking to you. It then helpfully puts that list at the end of your page. Allowing your readers to see who is linking to you helps to put your comments in context. He (she?) also mentions what's different about my implementation: "By the way, I think the way JimsLog handles this referer feature is the best I've seen because it indicates referers by post." Thanks. I don't know enough about how Radio works, but if you can script with it at the level of Apache environment variables then it's pretty easy to have your backlink system only activate on requests for specific posts, rather than general links to your page (for me, in my system, this just means watching for a numeric query string - like ?4454 on the end of the REQUEST_URI - which will always be a link to a specific post.) I'd love it if Radio got this feature.
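For the curious, the check is roughly this. A minimal sketch, not my actual code - log_referer() here is just a hypothetical stand-in for the real logging step:

<?php
// Treat any request whose query string is purely numeric (like ?4454)
// as a request for a specific post, and only then look at the referer.
$query = isset($_SERVER['QUERY_STRING']) ? $_SERVER['QUERY_STRING'] : '';
if (preg_match('/^\d+$/', $query) && isset($_SERVER['HTTP_REFERER'])) {
    $post_id = (int) $query;
    log_referer($post_id, $_SERVER['HTTP_REFERER']);  // hypothetical helper
}
?>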
A couple of lingering problems. I was sure I had the back link thing filtering out links back from the same page (I don't want it recording back links originating on my own page.) But this doesn't seem to be the case now. I'll get that sorted this week.
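The fix should amount to something like this - a sketch, assuming the referer has already been pulled from the request:

<?php
// Ignore any referer whose host matches this site's own host, so
// links between my own pages never get recorded as back links.
function is_external_referer($referer)
{
    if ($referer == '') {
        return false;  // no referer header at all
    }
    $parts = parse_url($referer);
    $ref_host = isset($parts['host']) ? $parts['host'] : '';
    // only log when the referring host differs from our own
    return ($ref_host != '' && $ref_host != $_SERVER['HTTP_HOST']);
}
?>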
My friends from Montana arrive today. I'm very excited. The timing of everything is working out so well. I've been very busy finishing things, and now they arrive just as I am ready to pick my head up, switch gears, and take a long look around. I think Frank was here once before when I did that. In fact, I think that's what started me back into computers in the summer of '99. I wonder what's next.
David McCusker did mention me today, and now it seems that my last name is out. I guess I sort of like that. I wish I could be more revealing about my home life like he is. It's very interesting to read. If it were a book, I would be staying up late to see what happens with Lisa and the Taekwondo studio. As this isn't the case, MB and I stayed up late finishing Return of the King. This was her first time through. It's great to read out loud.
Finally put the new back end on the other site. Now, if I'm correct, I'm not going to build this project again (this is the third full revision.) I'm going to take a little break from scripting. And then I'm going to try a different project. I don't know what it is yet.
I'll still be here though.
David McCusker is taking note of back links as well (although not mine.) I hope this keeps getting talked about. I want more people to integrate this into their blogging software. Come on, it's not hard. Expose your referers! Now that publishing itself is pretty much easy, this is the single largest enabler of the grand conversation.
I won't pretend I didn't have some hand in him noticing, but David Weinberger wrote a nice summary of my reference logging feature (which I should probably just call 'back links' since that's what other people working on similar things seem to be calling it.) Anyway, I am a huge fan of his so this makes me quite happy. Thanks.
Go buy his new book. That will probably make him happy.
We're buying our tickets to Montana right now. Yahoo!
Jon Udell has a piece on the Disenchanted link back (which is a lot like what I've been calling reference logging.) I wonder if they have automated it in the same basic way that I have? In any case, Udell seems to grasp why this might be very cool inside the blogging world.
Looks like decafbad has a very basic version of the same idea now too. As well as diveintomark. Cool. From what I can tell I'm the only one grabbing actual text off the referring pages. But that just might mean that mine won't scale.
What big bang?
This makes me happy since I've been dismissed out of hand more than once for suggesting that the big bang is in no way "proven" to be the true story. In fact, if I understand correctly, it's almost entirely based on the red shift of very distant stars. But this could be caused by lots of things. Maybe the speed of light is getting faster.
(Interesting Shulgin article on this topic.)
While understanding that this truly reveals the amateurishness of my coding abilities, I've posted the PHP code for the reference logging feature I built. (No, this isn't useful in any real way - I'm just posting it in case someone was wondering how I did it. Maybe someone could get an idea from it. But it's too tied to the rest of my system for someone else to be able to use this fragment. Still, I'd like to see others implement their own versions of this feature.)
After a page is served from the database here, the system checks whether reference logging is turned on for that page. If so, it includes the snippet of code linked above, which determines whether there was an external referer that linked directly to a specific post here (a link to a URL you get when you click on any [link] link.) If so, this bit of code gets the HTML of the external page, parses it so that only the bit of text right around the link to us is left, and stores that text and link in the database here.
It's not pretty. But it does seem to work. I guess, like all of my stuff, it should probably be thought of as a proof of concept. Maybe some day a real coder could write a more elegant version. Still, I'm not sure that version would actually work any better.
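For anyone who doesn't want to wade through the posted code, here's the basic shape of it. This is a simplified sketch, not the code itself - the backlinks table and the "?<post id>" link format are stand-ins for my real setup:

<?php
function log_backlink($post_id, $referer)
{
    // get the HTML of the external referring page
    $html = @file_get_contents($referer);
    if ($html === false) {
        return;  // couldn't fetch the page, nothing to log
    }

    // make sure the referring page really links to this post
    $pos = strpos($html, '?' . $post_id);
    if ($pos === false) {
        return;
    }

    // keep only the bit of text right around the link
    $start = ($pos > 300) ? $pos - 300 : 0;
    $snippet = trim(strip_tags(substr($html, $start, 600)));

    // store the link and snippet in the database
    mysql_query(sprintf(
        "INSERT INTO backlinks (post_id, url, snippet) VALUES (%d, '%s', '%s')",
        $post_id, addslashes($referer), addslashes($snippet)
    ));
}
?>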
I'm very interested to see people's reaction to this feature. This has been hard, so far, because it's not immediately clear what I'm up to. But the implications could be rather large. Especially in the weblog world. There are lots of conversations going on between pages, but no real way for someone unknown to break into the conversational loop. Or rather, the only way for someone unknown to break into the loop is to be pointed at by someone already in the loop. This leads to a certain level of cliquishness. But if all specific references to a page showed up as a link and a snippet of text on the page being linked to (well, actually on a sub page, but noted from the page itself) then new people could be introduced into conversations just by commenting on them.
This takes some power away from the individual author (in the sense that they aren't vetting every single link, some are just appearing.) So there could be resistance on that point. I wonder.
Back to web services. There is some debate over exactly what is meant by this term. I understand it as the web minus the HTML presentation layer. Or, in other words, web services return data in response to specific requests. Web sites, on the other hand, return web pages (formatted in HTML) in response to specific requests. So web services are a good way for computers (or computer programs, really) to talk to each other. A computer program wants data from external sources to be formatted in a rigorously standard way. That's what web services provide. People, on the other hand, want data formatted in a visually pleasing way. That's what web pages (try to) do.
So web services are just standards for communication. As I mentioned the other day, this is something people who build online applications can get very excited about. Sure, I could always build a program that would connect to a web site, download a specific page, and then sift through the HTML to extract certain information. The problem is that if the web site changes the visual design of their page, it will probably break your program. The piece of data you want won't be in the same place any more. With web services the web site publishes a specification which details exactly how the information will be presented. This might mean something very basic, like a comma separated list (like: date,time,theatre,price) or some sophisticated XML schema. The key is just that the structure is agreed upon and doesn't change. This gives third party developers the confidence to write software that uses the service, knowing it will continue to work in the future.
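To make that concrete, here's a toy consumer for the comma separated example. The URL is made up - the point is that the program depends only on the agreed field order, not on any page layout:

<?php
// read each line of a hypothetical comma separated service
$lines = file('http://example.com/showtimes.txt');
foreach ($lines as $line) {
    // the agreed-upon structure: date,time,theatre,price
    list($date, $time, $theatre, $price) = explode(',', trim($line));
    echo "$theatre: $date at $time, $price\n";
}
?>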
And this turns out to be a huge deal. It's very web like. It's about cooperation.
The recent flood of thinking and writing on this subject has been largely fueled by Google. They published what they are calling the Google Web API. Maybe you remember hearing this term API during the Microsoft trial. It's what some on the government side kept saying they wanted Microsoft to "open up." API stands for application programming interface. The agreed upon data structures that comprise web services are APIs. Web services are what happen when web sites publish APIs and developers build tools that use them. (Microsoft Windows has a set of APIs too. They detail how programs running on top of Windows can make calls to the system to take care of basic low level operations, and the responses a program should expect to get back from the system. Allegedly Microsoft does not reveal their entire API to outsiders, thus Microsoft's own programs - like Word, or Excel - have a huge advantage.)
The Google API is completely open. It allows other programs to query the Google search engine. The API specifies how you should send your request (the actual structure of your request) and how the results will be sent back to you. Google is calling this a test. Anyone can use the API, but you have to sign up with them (for free), you are limited to 1,000 queries a day, and you can't use it for commercial purposes. They can keep track of how many queries you use a day because part of the API specifies that each request must be sent with a unique ID you receive from Google when you register.
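Here's roughly what a query looks like, as I understand the published spec. A sketch, assuming PHP's SOAP support and the GoogleSearch.wsdl file from Google's developer kit - 'YOUR-KEY' stands in for the ID you get when you register:

<?php
$client = new SoapClient('GoogleSearch.wsdl');
$result = $client->doGoogleSearch(
    'YOUR-KEY',    // the unique ID sent with every request
    'back links',  // the query itself
    0,             // index of the first result
    10,            // how many results to return
    false,         // filter similar results?
    '',            // no country/topic restriction
    false,         // SafeSearch off
    '',            // no language restriction
    'latin1',      // input encoding
    'latin1'       // output encoding
);
foreach ($result->resultElements as $r) {
    echo $r->title . "\n" . $r->URL . "\n\n";
}
?>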
This is really cool stuff. People like me get very excited when we suddenly gain lots of power for building things online. I can now write a program that harnesses the amazing data set and algorithms of Google. And I can do this in the background, without actually sending my users to Google. By publishing their API Google has effectively added all the capabilities of Google to whatever programming language I am using. It almost seems like too much power. It's intoxicating. Still, I can't think of exactly what to build. There's no sense in just writing a front end for searching - Google's web page is already perfectly fast and minimal. But there is undoubtedly more that can be done. And lots of people are having a really good time trying to figure this out.
If it works, the web of the future will be largely about web services. And this means that the web will be more and more about assembling the information you view as a user from a variety of different sources which are all live and machine accessible over the internet. Or, in other words, it's about all of us agreeing on the structure of the language we're going to use for our programs to talk and work with one another. And agreeing to work together makes us all more powerful. Lots more on this topic to come...