I'm very curious as to why publishing this information (how to defeat a particular audio CD copy-protection method with a felt-tip pen) doesn't violate the DMCA. (I think Reuters and CNET published it too.)
I'm not a lawyer, of course, but it seems to be a clear violation. Is it just that the RIAA isn't pressing the issue because it would make them look like fools? Like bigger fools, that is. It certainly would make it clear how overreaching the DMCA is.
In the test, WiFi Metro will integrate Green Packet's SONaccess IP routers and software with WiFi Metro's system of LAN (local area network) "hot spots," allowing users to switch from one network to another as they move in and out of the respective coverage areas, without having to log on and off.
That will be an important step. No doubt others are working on this as well.
O.K., now it's starting to get hot. My boasting about not needing an air conditioner for the second straight summer is already looking not so smart...
Dailywireless.org is a good source of news on the burgeoning wireless world with an emphasis on community networking.
Here are some good notes (with some good links) about the kind of networks we should be aiming at (relatively short.) That link is from this quick roundup of Pulver's Connectivity 2002 conference at the always informative satn.org.
This is the technology, although not necessarily the implementation, that I'm waiting for. I don't like the word "patented" in this marketing passage, but the rest sounds right on:
MeshNetworks has developed a revolutionary mobile broadband network architecture based on patented ad hoc peer-to-peer (p2p) routing technology. The result is a self-forming, self-healing wireless mesh where mobile devices become the network.
Blogging their way to the north pole.
Test photos from a Foveon X3 prototype sensor. The Foveon site.
Heading out to Long Island. I'll be connected. Hope you're all getting out of the house too.
More utter madness from the MPAA. I've stopped being too outraged by this sort of thing, and am now taking it as an inevitable (and ineffectual) stage that must be gone through by a dinosaur in death throes. Sort of like cussing madly at the people around you as you fall unhappily to dust. But I hope that's not too optimistic, and so I'm glad there are people still getting pissed off at these ridiculous legal plays...
Still, the better response would be to build networks that can't be shut down.
Mozilla 1.0 RC3 is out. Supposedly this will be the final release candidate before the official 1.0. Real soon now.
Other than the sporadic image file corruption problem, RC2 has been perfect.
Cory Doctorow describes using peek-a-booty for what he calls: Distributed provision of service. That's what I'm talking about. We need alternate routing methods to guard against someone trying to take the internet down.
Interesting new beta features at google. At first glance sets is very cool. Go google (but get rid of that dilbert crap!)
keyboard shortcuts for mozilla.
David Isenberg and David Weinberger spell it out (from Nov. 2001): The best network is the hardest one to make money running. Great statement of the problem. (from htp)
Open content network. This seems really cool. A description of the network is here. I have to do more research on this.
If you use PHP to develop web apps you should be aware that 4.2 brings some changes that could break your scripts. My impression is that these are good changes, but they do mean there's work to be done. Here's an article on keeping up to date.
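The change most likely to bite, as far as I can tell, is register_globals going off by default in 4.2. A minimal sketch of the fix, assuming a script that used to count on request variables silently turning into globals:

```php
<?php
// Under the old register_globals default, a request like
// story.php?id=42 quietly created a global $id variable.
// With the new 4.2 default that stops working, so read input
// explicitly from the superglobals (available since PHP 4.1):
$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;
echo "Showing story $id";
?>
```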
So I have my database on the server. That holds all the information on this site. And then I have a bunch of PHP scripts that look at the incoming browser, check for cookies, and then assemble the correct page out of the database.
But I also have a copy of that setup on the machine sitting on my desk (the server is in California, I'm in NYC.) When I'm making big changes I do it on my machine, and then once it's running locally I transfer everything up to the server.
One possibly interesting thing I noticed the last time I did this is that I can also have all the PHP scripts running locally, but accessing the remote database. Hmmm. So I started wondering how much load it would save on the server if all the scripts ran locally (but still accessed the server's database.) And now I've started thinking about having the database local too, as well as keeping a second database centralized on the server.
Here's how it would work. Everything (apache, php, mysql) runs locally on my machine. I can make pages, post, edit just like before. All changes affect the setup on my machine. But then each page also has a setting for whether it should be synched to the central server as well. If so, then every post is made to the local machine, and then if the local machine is connected to the internet, the post is also sent to the central server. If the local machine is not connected then a flag is set, and the next time a connection is established, all the new posts are sent to the server.
And then the reverse would have to be true also. On the central server I view pages (some of which are mine, some of which belong to other people on this site.) For each page I choose whether to mirror it to my local machine. Then when my machine checks in with the central server it looks at the last time stamp for every page I am mirroring and updates my local database where necessary.
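To make the first half concrete, here's a rough sketch of the "push unsent posts" step, using a synced flag on each post. The table and column names are invented for illustration, not my actual schema:

```php
<?php
// Sketch: push posts written offline up to the central server.
$local = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('weblog', $local);

// Posts flagged as not yet sent.
$unsent = mysql_query("SELECT id, body, modified FROM posts WHERE synced = 0", $local);

// If the server is unreachable, the flags just stay set for next time.
$remote = @mysql_connect('server.example.com', 'user', 'pass');
if ($remote) {
    mysql_select_db('weblog', $remote);
    while ($post = mysql_fetch_assoc($unsent)) {
        $body = mysql_escape_string($post['body']);
        mysql_query("REPLACE INTO posts (id, body, modified)
                     VALUES ({$post['id']}, '$body', '{$post['modified']}')", $remote);
        mysql_query("UPDATE posts SET synced = 1 WHERE id = {$post['id']}", $local);
    }
}
?>
```

The mirroring direction would be the same thing in reverse, driven by those time stamps.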
This setup would be especially good on a laptop (or smaller machine.) That way I could always be adding to the site, regardless of internet connectivity, and everything would synch up as soon as possible. Also, having the data mirrored in a lot of places has obvious anti-catastrophe appeal.
I'm going to keep thinking about this. I haven't really gotten to the heart of it yet.
Someone must have slipped Ev some of the kool-aid:
You see, I'm being seduced into the Mac camp (for a notebook, at least). I can't help it. They're everywhere. And beautiful. And everyone's in love with them. And the Unix command line and Java integration.... well, damn.
Hot swappable, external, firewire connected, IDE hard drive enclosure. Ugly, but cool.
Everybody is blogging the emerging technology conference. It's almost like being there. Doc has tons. Aaron has 1, 2, 3 days' worth. Wes has typically good stuff. Does this (take Wes' for example) constitute a new writing style? Technical stream of consciousness? Actually quite easy to read if you have some background. Data dense, for sure. Joey DeVilla has lots of notes too. There's more, but how much are you really going to read?
O'Reilly has a big conference on emerging technology going on now. As has become standard with these events, 802.11b wifi connections are everywhere, and many of those attending have laptops from which they blog the various meetings in real time. This is where a lot of the business world will be in a few years I think. Anyway, Rob Flickenger set up a program called EtherPEG that sniffs the local network for .gif and .jpeg images. He then collaged them for some very interesting results.
If you've never heard of EtherPEG, it's a Mac hack that's been around for a while that combines all of the modern conveniences of a packet sniffer with the good old-fashioned friendliness of a graphics rendering library, to show you whatever GIFs and JPEGs are flying around on your network. It's sort of a real-time meta browser that dynamically builds a view of other people's browsers, built up as other people look around online.
Check out the results here. Very cool.
MB went to Bentonville, Arkansas yesterday. If you're in the retail world you know there is only one reason to go to Bentonville. She sure has a weird collection of jobs, being part restaurateur, part graphic designer, and part local political figure. She used to be just a graphic designer. I can barely remember those days though. Neither of us worked at all the first year we were together. Now we eat dinner together but do almost nothing else.
Sleeping alone is really something if you're not used to it. I'd almost say I liked it, except I know what I really like is sleeping alone occasionally. I didn't wake up until after 11:00! That's something I haven't done in years. I think I needed a long sleep.
David McCusker worked out his problems with Lisa. This makes me happy. Although only hearing from one side (his) has made me suspicious of her. Like I'm protective of him, which is weird since I don't know him personally. I guess because I don't know him, and because he writes clearly about relationship issues in a way I don't do in public, there is a lot of transference that can happen from just reading the words. If I actually knew them I wouldn't be surprised to feel less engaged by the whole episode. Anyway, I hope it continues to go well. For all of us.
The CEO as well as the founder of Napster have both resigned. I'm confused as to why this is being so widely reported. You'd almost think this was important in some way. But Napster has been dead for a long time. Once they shut down the free trading it was obvious they could never transition to a pay service.
There are still file trading networks. And they have more users, trading more files (including copyrighted music files,) than Napster ever had. And they continue to grow. I believe the RIAA will eventually be crushed by these networks. But all this has absolutely nothing to do with Napster.
I fixed back links (I'm dropping 'reference logging' in favor of 'back links') so they no longer register links from the same page, or links from that page's archives.
I can't see any difference between mozilla 1.0 RC1 and RC2. Maybe it's mail or news changes. For browsing both are rock solid (although I've found most everything is rock solid on OSX,) fast, and thoughtfully designed (tabbed browsing, cookie support, and full javascript controls come immediately to mind.) Apparently RC2 should run on MacOS 8.5 and 8.6 although I've yet to confirm this. You can get RC2 here.
Our friends from Montana have safely departed. What a week. It took me all of Sunday and Monday just to recover. Although we didn't specifically plan it this way, after not seeing them for 4 years we will now be seeing them again in 6 weeks when we head out west. That made saying goodbye not so hard.
Ren is one amazing kid. He seemed to take to the city as if it was perfectly natural to have so many millions of people packed into a couple of square miles. Best quote from bed at the end of a very long day and night: "I hear the beautiful music of all the people who don't want the lights to go out." Indeed. I have a feeling we'll see him again in NYC.
Apple is expected to introduce new rack mountable servers today at 9:00 am pacific time. Possibly another product as well. We'll be watching.
Dual screen laptop (via /.)
There is no spoon notes that David Watson
[has come] to the rescue for Radio users who would like to add "autolinking" or automatic "backlinking" to their blogs. He's created a little webservice called "getReferers" that automatically generates a list of links to the sites who are linking to you. It then helpfully puts that list at the end of your page. Allowing your readers to see who is linking to you helps to put your comments in context
He (she?) also mentions what's different about my implementation: "By the way, I think the way JimsLog handles this referer feature is the best I've seen because it indicates referers by post." Thanks. I don't know enough about how radio works, but if you can script with it on the level of apache environment variables then it's pretty easy to have your backlink system only activate on requests for specific posts, rather than general links to your page (for me, in my system, this just means watching for a numeric query string - like ?4454 on the end of the REQUEST_URI - which will always be a link to a specific post.) I'd love it if radio got this feature.
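In case that's too compressed, the whole per-post check can be as small as this sketch (the logging itself is elided, and the variable names are just for illustration):

```php
<?php
// Only fire the back link machinery when the request targets a
// specific post, i.e. ends in a numeric query string like ?4454.
if (preg_match('/\?(\d+)$/', $_SERVER['REQUEST_URI'], $m)) {
    $post_id = (int) $m[1];
    // ...look up the referer and record it against $post_id...
}
?>
```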
A couple of lingering problems. I was sure I had the back link thing filtering out links back from the same page (I don't want it recording back links originating on my own page.) But this doesn't seem to be the case now. I'll get that sorted this week.
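The fix should just be a host comparison before anything gets logged; something like this sketch (variable names illustrative) is what I'll be double-checking:

```php
<?php
// Ignore referers that come from this site itself.
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$parts = @parse_url($referer);
if (!empty($parts['host']) && $parts['host'] != $_SERVER['HTTP_HOST']) {
    // External referer: OK to record as a back link.
}
?>
```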
My friends from Montana arrive today. I'm very excited. The timing of everything is working out so well. I've been very busy finishing things, and now they arrive just as I am ready to pick my head up, switch gears, and take a long look around. I think Frank was here once before when I did that. In fact, I think that's what started me back into computers in the summer of '99. I wonder what's next.
David McCusker did mention me today, and now it seems that my last name is out. I guess I sort of like that. I wish I could be more revealing about my home life like he is. It's very interesting to read. If it was a book I would be staying up late to see what happens with Lisa and the Taekwondo studio. As this isn't the case MB and I stayed up late finishing Return of the King. This was her first time through. It's great to read out loud.
Finally put the new back end on the other site. Now, if I'm correct, I'm not going to build this project again (this is the third full revision.) I'm going to take a little break from scripting. And then I'm going to try a different project. I don't know what it is yet.
I'll still be here though.
David McCusker is taking note of back links as well (although not mine.) I hope this keeps getting talked about. I want more people to integrate this into their blogging software. Come on, it's not hard. Expose your referers! Now that we pretty much have ease of use in terms of publishing, this is the single largest enabler of the grand conversation.
I won't pretend I didn't have some hand in him noticing, but David Weinberger wrote a nice summary of my reference logging feature (which I should probably just call 'back links' since that's what other people working on similar things seem to be calling it.) Anyway, I am a huge fan of his so this makes me quite happy. Thanks.
Go buy his new book. That will probably make him happy.
We're buying our tickets to Montana right now. Yahoo!
Jon Udell has a piece on the Disenchanted link back (which is a lot like what I've been calling reference logging.) I wonder if they have automated it in the same basic way that I have? In any case, Udell seems to grasp why this might be very cool inside the blogging world.
Looks like decafbad has a very basic version of the same idea now too. As well as diveintomark. Cool. From what I can tell I'm the only one grabbing actual text off the referring pages. But that just might mean that mine won't scale.
What big bang?
This makes me happy since I've been dismissed out of hand more than once for suggesting that the big bang is in no way "proven" to be the true story. In fact, if I understand correctly, it's almost entirely based on the redshift of very distant stars. But this could be caused by lots of things. Maybe light itself is getting faster.
(Interesting Shulgin article on this topic.)
While understanding that this truly reveals the amateurishness of my coding abilities, I've posted the PHP code for the reference logging feature I built. (No, this isn't useful in any real way - I'm just posting it in case someone was wondering how I did it. Maybe someone could get an idea from it. But it's too tied to the rest of my system for someone else to be able to use this fragment. Still, I'd like to see others implement their own versions of this feature.)
After a page is served from the database here, the system checks whether reference logging is turned on for that page. If so it includes the snippet of code linked above which determines if there was an external referer who had linked directly to a specific post here (a link to a URL you get when you click on any [link] link.) If so this bit of code gets the HTML of the external page, and parses it so that only the bit of text right around the link to us is left, and stores that text and link in the database here.
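In outline, the snippet does something like this. This is a simplified sketch rather than the posted code, and the base URL, table, and column names are placeholders:

```php
<?php
// Sketch of the reference-logging pass described above.
// $post_id is assumed to come from the ?4454-style query string.
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$self = 'http://www.example.com/';  // this site's base URL (placeholder)

if ($referer && strpos($referer, $self) !== 0) {
    // PHP 4 lets file() read a URL, so fetch the referring page.
    $lines = @file($referer);
    $html = $lines ? implode('', $lines) : '';
    $pos = $html ? strpos($html, $self) : false;
    if ($pos !== false) {
        // Keep just the text right around the link back to us.
        $snippet = strip_tags(substr($html, max(0, $pos - 200), 400));
        mysql_query(sprintf(
            "INSERT INTO refs (post_id, url, snippet) VALUES (%d, '%s', '%s')",
            $post_id,
            mysql_escape_string($referer),
            mysql_escape_string($snippet)
        ));
    }
}
?>
```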
It's not pretty. But it does seem to work. I guess, like all of my stuff, it should probably be thought of as a proof of concept. Maybe some day a real coder could write a more elegant version. Still, I'm not sure that version would actually work any better.
I'm very interested to see people's reaction to this feature. This has been hard, so far, because it's not immediately clear what I'm up to. But the implications could be rather large. Especially in the weblog world. There are lots of conversations going on between pages, but no real way for someone unknown to break into the conversational loop. Or rather, the only way for someone unknown to break into the loop is to be pointed at by someone already in the loop. This leads to a certain level of cliquishness. But if all specific references to a page showed up as a link and a snippet of text on the page being linked to (well, actually on a sub page, but noted from the page itself) then new people could be introduced into conversations just by commenting on them.
This takes some power away from the individual author (in the sense that they aren't vetting every single link, some are just appearing.) So there could be resistance on that point. I wonder.
Back to web services. There is some debate over exactly what is meant by this term. I understand it as the web minus the HTML presentation layer. Or, in other words, web services return data in response to specific requests. Web sites, on the other hand, return web pages (formatted in HTML) in response to specific requests. So web services are a good way for computers (or computer programs, really) to talk to each other. A computer program wants data from external sources to be formatted in a rigorously standard way. That's what web services provide. People, on the other hand, want data formatted in a visually pleasing way. That's what web pages (try to) do.
So web services are just standards for communication. As I mentioned the other day, this is something people who build on line applications can get very excited about. Sure, I could always build a program that would connect to a web site, download a specific page, and then sift through the HTML to extract certain information. The problem is that if the web site changes the visual design of their page, it will probably break your program. The piece of data you want won't be in the same place any more. With web services the web site publishes a specification which details exactly how the information will be presented. This might mean something very basic, like a comma separated list (like: date,time,theatre,price) or some sophisticated XML schema. The key is just that the structure is agreed upon and doesn't change. This gives third party developers confidence to write software that uses that service. The confidence is that your program will continue to work in the future.
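To make the comma separated case concrete, the consuming program could be as dumb as this sketch (the URL and field order are made up to match the example):

```php
<?php
// Read a hypothetical showtimes feed published as date,time,theatre,price.
$lines = @file('http://movies.example.com/showtimes.txt');
foreach ((array) $lines as $line) {
    $line = trim($line);
    if (!$line) continue;  // skip blank lines
    list($date, $time, $theatre, $price) = explode(',', $line);
    echo "$theatre: $date at $time for \$$price\n";
}
// A visual redesign of the site can't break this, because the
// published structure, not the HTML, is the contract.
?>
```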
And this turns out to be a huge deal. It's very web like. It's about cooperation.
The recent flood of thinking and writing on this subject has been largely fueled by Google. They published what they are calling the Google Web API. Maybe you remember hearing this term API during the Microsoft trial. It's what some on the government side kept saying they wanted Microsoft to "open up." API stands for application programming interface. The agreed upon data structures that comprise web services are APIs. Web services are what happen when web sites publish APIs and developers build tools that use them. (Microsoft Windows has a set of APIs too. They detail how programs running on top of Windows can make calls to the system to take care of basic low level operations, and the responses a program should expect to get back from the system. Allegedly Microsoft does not reveal their entire API to outsiders, thus Microsoft's own programs - like Word, or Excel - have a huge advantage.)
The Google API is completely open. It allows other programs to query the google search engine. The API specifies how you should send your request (the actual structure of your request) and how the results will be sent back to you. Google is calling this a test. Anyone can use the API, but you have to sign up with them (for free,) you are limited to 1,000 queries a day, and you can't use it for commercial purposes. They can keep track of how many queries you use a day because part of the API specifies that each request must be sent with a unique ID you receive from google when you register.
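The API is SOAP based, so from PHP you would go through a SOAP library. Here's a sketch using the third-party NuSOAP library; the method and parameter names follow the published spec as I understand it, but treat the details as approximate:

```php
<?php
// Sketch: query Google through its SOAP API with NuSOAP (PHP 4).
require_once('nusoap.php');

$client = new soapclient('http://api.google.com/search/beta2');
$result = $client->call('doGoogleSearch', array(
    'key'        => 'your-key-here',   // the ID Google issues at signup
    'q'          => 'back links weblog',
    'start'      => 0,
    'maxResults' => 10,
    'filter'     => false,
    'restrict'   => '',
    'safeSearch' => false,
    'lr'         => '',
    'ie'         => 'latin1',
    'oe'         => 'latin1',
), 'urn:GoogleSearch', 'urn:GoogleSearch');

// Each result element carries the title, URL, snippet, and so on.
foreach ((array) $result['resultElements'] as $hit) {
    echo $hit['title'] . ' -- ' . $hit['URL'] . "\n";
}
?>
```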
This is really cool stuff. People like me get very excited when we suddenly gain lots of power for building things on line. I can now write a program that harnesses the amazing data set and algorithms of google. And I can do this in the background, without actually sending my users to google. By publishing their API google has effectively added all the capabilities of google to whatever programming language I am using. It almost seems like too much power. It's intoxicating. Still, I can't think of exactly what to build. There's no sense in just writing a front end for searching - google's web page is already perfectly fast and minimal. But there is undoubtedly more that can be done. And lots of people are having a really good time trying to figure this out.
If it works, the web of the future will be largely about web services. And this means that the web will be more and more about assembling the information you view as a user from a variety of different sources which are all live and machine accessible over the internet. Or, in other words, it's about all of us agreeing on the structure of the language we're going to use for our programs to talk and work with one another. And agreeing to work together makes us all more powerful. Lots more on this topic to come...
Though the album was rejected by one major label as uncommercial, Wilco's "Yankee Hotel Foxtrot" defied record-industry expectations by selling 55,573 copies in its first week and debuting at No. 13 on the Billboard album chart--by far exceeding the band's past sales achievements...
Where are the record company investors? Why haven't they kicked Valenti out yet? Isn't that what's supposed to happen?
Last summer, Reprise Records let Wilco walk away from its record deal because executives said "Foxtrot," an experimental pop album, lacked an obvious hit single and therefore wouldn't sell. The band began Net-streaming the album on its Web site, allowing listeners to preview songs for free.
Rather than hurting the band's sales, the strategy appears to have only built anticipation for the official release.
Here's another new page at datamantic.
A bill has been introduced in Peru which would require the government to use free software.
Microsoft is of course outraged, and has complained. Here is the utterly amazing reply from Dr. Edgar David Villanueva Nunez, Congressman of the Republic of Peru. He says, in part:
To guarantee the free access of citizens to public information, it is indispensable that the encoding of data is not tied to a single provider. The use of standard and open formats gives a guarantee of this free access, if necessary through the creation of compatible free software.
To guarantee the permanence of public data, it is necessary that the usability and maintenance of the software does not depend on the goodwill of the suppliers, or on the monopoly conditions imposed by them. For this reason the State needs systems the development of which can be guaranteed due to the availability of the source code.
Amen. The whole letter is worth a read. (from MeFi)
Today is my 33rd birthday.
David Weinberger is wondering about the odd capitalization of Tom's blog title, IMproPRieTies. I thought it was a phonetic fudge of "I'm pretty." IM PR+T
We'll have to wait for some independent verification on this. I mean on whether or not he's pretty.
"The conversation continues..." is another mailing list I'm on (is it still called a mailing list if it's only one way?) This one is put out by Kevin Werbach and Esther Dyson, publishers of the influential Release 1.0: Esther Dyson's Monthly Report. Not nearly as entertaining as EGR, but it is pretty good coverage of the tech world. I don't learn too much new, but they have a knack for summing up the current thinking in an easy to swallow form. The last one featured a piece titled "The New WWW" where Kevin Werbach argues that Weblogs, Web Services, and WiFi are the new WWW.
"The old grassroots energy is coming back. Web services, Weblogs and WiFi are the new WWW."I've been trying to get something together about web services. This is an important emerging area. And it does seem true that "the old grassroots energy" is coming back. We'll see.
Compare Kevin's thought to megnut's:
All this talk about APIs and web services warms my heart. We've passed the nadir of the dot-com hype and we're coming back to the Web in interesting and important ways -- opening up sites through APIs and services and working together to build better and more powerful applications. People are getting excited again about the potential of the Web and it's really great to see.
She points to a recent Kottke post where he sees the same thing: "But I admit that Web services makes me feel just a little bit tingly."
What's all the fuss about? More soon.
(You could subscribe to 'The conversation continues..." here.)
If you look at the bottom of this post you can see the first real example of reference logging. Somebody linked to that post, and when their link was followed this system went out to investigate, grabbed the bit of text surrounding the link on their page, and added it below my post (click on [1 ref] to see the reference page.)
Sort of cool I think.
Also the system keeps track of new references I haven't seen yet, so when I loaded up the front page this morning it listed [1 new ref] next to my page link just like it does for new posts and new comments.
You could turn this on for your page in [editpage].
I don't listen to much rap music. But lately I've been playing Eric B. and Rakim's groundbreaking 1987 album Paid in Full over and over again. Seems to perfectly match my mood. Big Jimmy gave me some really nice headphones which I have plugged into my iMac so I don't bug my office mates with the constant repetition.
Thinking of this
you keep repeating you miss
the rhymes from the
microphone soloist
So you sit by the radio
hand on the dial soon
as you hear it
pump up the volume
I think "I know you got soul" might be the best rap song of all time.
While I'm still waiting for the danger hiptop, Sony Ericsson is about to introduce something similarly cool, in a more traditional cell phone form factor. I guess it's officially to be called the Sony Ericsson P800. People are going crazy over this one.
Want to get tons of very long beautifully written emails from someone slightly insane? Of course you do. Register here. You won't be sorry.
I am completely shocked by this man's output. Both quality and quantity. His blog, which I've pointed to a bunch before (plus it's listed over there on the left) is here, but the email list is much better.
Heading toward Tarshish is a nice looking new blog using my software.
Here is what might be considered the canonical text for the anti-copy-protect-everything crowd. Long, but well reasoned. It's all in there.
Disenchanted's linkback is something similar to what I've been working on. If I understand theirs correctly, I think my implementation is better. If you link to any specific post on this page (to a URL given by a [link] link,) and then follow that link to this page, it will be noted by this page below the post you link to. I call this reference logging. It shows up below the post as [1 ref]. Click on that and you'll see the reference along with a link back to the page that made it. My system even goes to your page and grabs a little bit of text around the link you made and adds that (sort of like a comment.)
Flash time waster:
flashback
Bruce Sterling gave a talk at the Computers Freedom and Privacy Conference in San Fran on April 19th. Sort of rambling, but Sterling is always worth a look in my opinion.
I mentioned the new Apple Powerbook below. That is my dream machine. Really beautiful, I think. Some of the design team from that project left Apple and started a company, oqo, that has announced a very small full powered computer. Their flash-heavy (why? oh why?) web site is here.
These stats are from the internetweek article:
Smaller than your average paperback book, the standalone device measures 4 inches by 2.9 inches and weighs less than 9 ounces. It sports a 4-inch, super-bright VGA color LCD; Synaptics touchscreen; 256-Mbyte onboard RAM; 10-Gbyte+ hard drive; 1394 FireWire; USB; audio; OQO-link connectors; and 802.11b and Bluetooth wireless networking.
The "ultra-personal computer," as its makers now call it, is still in development, but is expected to be commercially available from consumer electronics resellers later this year, said Jory Bell, OQO president and CEO.
Here's the news.com article with a tiny picture. I'll try to find some better shots. It really is amazing looking. It runs Windows XP, so I'm not interested in that, but I'm sure it will run linux before too long (if it doesn't already.) This is a good indication of the kind of miniaturization that is coming to the market.
So much has happened while I've been away (well, I haven't really been away, but I've been too busy to write.) I'm going to try to catch up on a few highlights.
First up, the mozilla web browser has finally reached 1.0. They are calling it 1.0 Release Candidate 1 (RC1.) So while it's not the official "we promise it's done" 1.0 release (which should come in the fall) it is a "maybe it's done" 1.0 release. And really it is done. I'm sure they will do a lot of polishing between now and the final 1.0, but it's ready to go today. It really works well. I strongly encourage everyone to check it out. Moving away from Internet Explorer is very important for the future health (and diversity) of the web. You can download it here (9 to 14 megs, depending on your platform.)
I've been using Mozilla as my main browser since 0.9.4. I thought 0.9.8 was basically good enough - but it had some serious text area weirdness that kept me from recommending it. I'm all about text areas. 1.0 RC1 fixed that and other less annoying stuff. I still don't use mail, or news reader, or composer, or IM - but the browser itself is rock solid (on OS X at least,) renders fast, and once you use tabbed browsing there is no going back.
It took a long time, but this really is a triumph.
I'll also note chimera which is a project to build a native Mac OS X browser using gecko, the heart of the Mozilla browser project. (Mozilla is all open, so it's not only good in itself, but it enables all sorts of other creative projects.) While I think Mozilla looks good on OS X, chimera looks amazing. Native OS X apps have access to sophisticated text rendering, and this puts it to good use. Chimera is only at 0.2.6, and not really usable yet. But if it gets there I'll switch.
If you're on OS X and want some guidance, click through to the comments below...