I've posted before about ZFS, Sun's seemingly amazing open source file system. I have some fairly large (couple of TB) ext3 partitions running on hardware RAID5 under Linux, and while I've never had any issues <knocking on wood> I've always felt like there must be a better way. If the ZFS hype is true, then it sounds like that better way. Although I guess these things are always open to reevaluation upon inspection.
Still, I'm pretty skeptical these days, and ZFS really does sound great. And now the rumors are that Mac OS X 10.5 (probably announced in January, shipping in March/April) will have ZFS support. Interesting. It almost seems too advanced for the home user, except the whole zpools stuff really makes sense. And it's very Apple. Want more space? Just plug in another hard drive. No fussing with it, or choosing parameters; and it doesn't show up as a separate drive or anything complicated like that; you just get that much more storage added to your pool of storage. Kind of like how you might think it would work if you didn't know too much about how it actually works.
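If I understand the zpool tools right, the command-line version of that story is about this simple (a rough sketch - the pool and device names here are made up):

    # create a storage pool from one disk
    zpool create tank c1t0d0
    # later: plug in another drive and just add it to the pool
    zpool add tank c1t1d0
    # the extra space shows up in the existing pool, not as a new drive
    zpool list tank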
Fujitsu announces 300GB laptop hard drive (2.5-inch). Nice!
So I'm absolutely stumped about what to do. I have a very nice server I bought over a year ago, colocated here in NYC at Peer1. They have been great. Zero problems. But maybe this goes to prove the saying "you get what you pay for", because they are at the expensive end of the cost spectrum for colocation (not way out of line or anything - just at the expensive end of the reasonable spectrum.)
My problem is that I have one project that needs more bandwidth. But it's a bit of a sideline project, and I can't really dump tons more money into it. That pretty much rules out the easy option, which would just be to buy more bandwidth from Peer1 (although that would be so easy I still do think about it.)
Another option would be to move to a facility with some economy bandwidth (really that pretty much means one thing - Cogent bandwidth.) Economy doesn't necessarily mean bad, but just toward the cheaper end of the spectrum. Maybe they oversell their capacity a bit, and/or maybe they don't respond quite as well to issues, and/or maybe latency can be a little high. On the other hand, prices can be significantly cheaper. Like starting at half as much. So in terms of bang for the buck it appears to me like it might be worth it. Of course you don't really know until you use it yourself. How much is a headache worth? How much is worrying about a headache worth?
Then there is the third option, which is to split the difference. I could move my bandwidth-hungry project, along with some other hobby projects, to some place with cheaper Cogent bandwidth, and keep my business customers at a higher quality location. To do that I could either buy another server (which would be cheap, since I wouldn't need much storage or really much horsepower considering how crazy powerful even entry level servers are today), or I could just get a dedicated server somewhere (which would mean some of the administration worries would be lifted from me and placed on the company I was buying from - maybe not a bad idea since I'm at best only an average Unix admin.)
And Peer1 actually has another venture called ServerBeach that I could use. You pay monthly for your own server, which they supply and initially configure (in a pretty minimal way.) Then I would get root access and could finish the setup however I wanted (i.e., they'd install CentOS 4.4 with all the latest patches, along with Apache, PHP, and MySQL - but I'd have to set up mail servers and DNS if I didn't want to use theirs, plus anything else I needed to customize - and I'd be responsible for keeping it up to date from there.) That's a pretty attractive solution. One great thing is that customers can log into a web-based control panel (not on your server - on the main ServerBeach servers) and power cycle (reboot) their machines remotely. This would give me a lot of peace of mind. Especially now that I've been doing some traveling - it would be great to know I could (most likely) bring the machine back from any issue from anywhere in the world. And on top of that, if there was some issue I couldn't solve, I could pay them to try to solve it. In my present situation I don't even have that option - I'm really on my own.
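Just to give myself an idea of what "finishing the setup" would actually involve, here's a rough first-login sketch (the package names are my guesses, not anything ServerBeach specifies):

    # bring the freshly imaged CentOS 4.4 box up to date
    yum update
    # run my own mail and DNS instead of theirs (package names assumed)
    yum install postfix bind
    # make sure services come back up after a remote power cycle
    chkconfig postfix on
    chkconfig named on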
Does it sound like I've already made up my mind? Maybe I have. But it's still hard to pull the trigger. Do I really need two machines? This server has been up for over a year and hasn't really had any issues. Maybe I'm being too cautious? I could just move everything to the cheaper bandwidth, and it might well be fine. That would save me money, and the headache of having two machines. But then what if something did go wrong? And in any case, doing anything that involves leaving my present situation is going to be a hassle involving some downtime for the sites I already have up. So I definitely want to get it right the first time.
The only other option is the one I have been taking - keep thinking about it and putting off the decision. That's okay for a while, especially if you really are thinking about it and learning more, but I'm going to have to make up my mind soon. Tick tick tick....
I don't know this blog, so I can't comment on whether the author is doing anything more than speculating, but it strikes me as correct. Hollywood (along with the NY Times) completely misunderstands BitTorrent, and in doing so strikes a meaningless deal with the company that will in no way stop people from trading movies using the protocol. LOL. Good going, guys. I wrote about this a year ago when the deal was first being discussed.
Been wanting to post some more here, without too much luck. Giving it another try. Warning: no proofreading or anything, just unedited ramblings ahead.
I am just finishing up my second big project for my new business. It has been a very slow process to get the whole thing (the business) off the ground, but that is mostly because I am having to learn so many new things. None of it is particularly hard, per se; it's just having to pay attention to everything at the same time. I'm not used to that. Being the only employee is great, and maybe the only way I can work happily, but the downside is that I end up doing everything, and to put it mildly, I'm not good at everything.
It would probably be more efficient for me to work for another company that is run by people who, you know, actually know how to run a company. But that doesn't seem as fun. So I'm struggling along by myself, probably making a lot of stupid mistakes, and probably spending too much time reinventing wheels. But at least they are my mistakes and my wheels.
Building websites is pretty easy. Building websites that can be maintained and extended through time is more difficult. And building many websites that can be maintained and extended by just one person is even more difficult. This is where almost all my thinking goes.
Standardization is the key. I have spent a lot of time building the tools that will let me build websites. I could probably just code each site by hand in less time than it is taking me to develop an automated approach, but the problem is that once you have dozens of sites built, if they are all running on different code you have hacked together it is a nightmare to maintain. So what I've been aiming at is having just one very flexible code base that runs all the sites, and then as upgrades and bug fixes are made to that code they are automatically rolled out to all the sites.
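The rollout part is the easy bit, something like this sketch (the paths and tooling here are just for illustration, not my exact setup):

    # every site's docroot points at one shared engine:
    #   /var/www/siteA/htdocs -> /opt/engine
    #   /var/www/siteB/htdocs -> /opt/engine
    # per-site differences live in a small config file for each site

    # pushing a bug fix to all the sites at once:
    cd /opt/engine && svn update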
And I've done pretty well with this. The setup is actually pretty extreme. Here's the quick technical rundown:
OMG. Nokia N95. Of course it will take forever for that to hit the U.S., and possible specs have already leaked for the even more insane N97. I want that.
I'm happy about this news from Tim Berners-Lee concerning the future path(s) of HTML development:
Some things are clearer with hindsight of several years. It is necessary to evolve HTML incrementally. The attempt to get the world to switch to XML, including quotes around attribute values and slashes in empty tags and namespaces all at once didn't work. The large HTML-generating public did not move, largely because the browsers didn't complain. Some large communities did shift and are enjoying the fruits of well-formed systems, but not all. It is important to maintain HTML incrementally, as well as continuing a transition to well-formed world, and developing more power in that world.

This is smart. If we could have all jumped to XML at the same time maybe that would have been better, but it didn't happen and it's not going to suddenly happen, so we need an easier way forward. Perhaps the greatest thing about HTML is how dead simple it is to get started with. Of course that is also the worst thing about it (in the sense that it doesn't enforce itself rigorously, so while it's easy to get going, it's also easy to fly out of control and create truly horrible markup.) Still, I think it is better to stay more on the loose HTML side than the strict XML side as we go forward. Maybe not in a theoretical sense, but definitely in a practical sense.
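To make Berners-Lee's point concrete, here's the same fragment in the loose style browsers happily accept, and again in the well-formed style XML demands:

    <!-- loose HTML: browsers don't complain -->
    <p>an unclosed paragraph
    <img src=photo.jpg>

    <!-- well-formed: quotes around attribute values, slash in the empty tag -->
    <p>a closed paragraph</p>
    <img src="photo.jpg" />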
Now just get us some more powerful form elements!
Crazy small foldable computer from Samsung: SPH-P9000. Who names these things?
I'm not a big Adobe fan, and I've always been against Flash when there's any other way to get the job done, but it does seem like there are a lot of interesting things going on. As the web moves towards more richly interactive design (AJAXified web 2.0 stuff), maybe Flash starts to make more sense?
In any case, Adobe just made a huge contribution to the Mozilla project that is scheduled to pay off sometime in 2008: Tamarin Project. From the blog of one of the engineers:
Today Adobe announced that the ECMAScript 4 compatible virtual machine in the Adobe Flash Player has been contributed to the Mozilla project under the name Tamarin. It is the single largest contribution to the Mozilla foundation since its inception and consists of about 135,000 lines of source code. The engine is fully open source using the standard Mozilla license, with the Mozilla foundation retaining full ownership.

Tamarin will allow Mozilla (and therefore Firefox) to easily move to JavaScript 2. And while I'm always a little nervous about Flash (and JavaScript for that matter), I'm also getting impatient for more powerful scripting tools, and this is probably the way we are going to get them. So I'm staying hopeful and will be watching this.
Adobe's project Apollo:
Apollo is the code name for a cross-operating system runtime being developed by Adobe that allows developers to leverage their existing web development skills (Flash, Flex, HTML, JavaScript, Ajax) to build and deploy Rich Internet Applications (RIAs) to the desktop.

In other words: Flash apps you can download and run on your desktop. I am surprised they didn't do this sooner. Supposed to be ready in the middle of next year.