S E R V E R   S I D E

Damn:

AMCC will showcase HPC storage platforms featuring four 3ware 9550SX 12-port RAID controllers and 48 Hitachi 7K500 500GB SATA II disk drives at the heart of a Pogo Linux StorageWare 548 solution that delivers 24TB of SATA II capacity in a 5U enclosure. The system is powered by Opteron dual core CPUs and will sustain over 1.2GB/s of read bandwidth, demonstrating the industry's most compelling combination of speed, capacity, reliability, and price per gigabyte.
Storage capacity is really exploding, at least at the server level. Those 3ware SATA RAID cards are really nice. We're using the last-generation 9500 12-port, and the 9550 looks significantly sweeter (although what we have is more than enough for our needs, so this is more of a geek fetish thing, I guess).

I think desktop computers should come with RAID built in. At least RAID 1, so that people's data is a little more protected from drive failure. RAID 1 is probably okay to do in software (where the CPU does the work instead of a dedicated processor on a RAID card like the 3ware). And anything past the mid-range on the desktop should really come with 3 drives in a RAID 5 on a card. But not only is no one doing this, you can't even put 3 drives inside a G5 tower! (Well, not without a third-party bracket scheme that is not supported by Apple.) I guess I see what they are doing - "if you need that much storage, buy an external Apple RAID" - but I don't agree. Everybody needs RAID (now that drives are so cheap, why not? Is your data not worth an extra $200?), and Apple's external unit, while it is beautiful and also a pretty good deal in terms of $/GB, is just too big and too loud. Not everyone needs 14 drives. But really, almost everyone needs 3 drives.
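For what it's worth, the software RAID 1 route is pretty painless on Linux these days. A minimal sketch with mdadm; the device names are just placeholders for whatever two drives you dedicate to the mirror:

# Sketch only: mirror two whole drives with Linux software RAID 1 via mdadm.
# /dev/sdb and /dev/sdc are placeholder device names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0            # put a filesystem on the mirror
mkdir -p /data
mount /dev/md0 /data
cat /proc/mdstat              # confirm both halves of the mirror are active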
- jim 12-14-2005 1:17 am [link] [add a comment]

tsync:

provides transparent synchronization across a set of machines for existing files and directories. A transparent synchronization system makes keeping a set of files consistent across many machines---possibly with differing degrees of connectivity and availability---as simple as possible while requiring minimal effort from the user and maintaining security, robustness to failure, and fast performance....

In the Tsync usage model, the user writes a simple configuration file, similar to /etc/exports, describing which directories should be synchronized, and listing one or more other hosts that are part of the Tsync group (although this list does not have to contain all the hosts in the group). The user runs the Tsync daemon, tsyncd, on each machine in the group. Then when the user creates/modifies/deletes files on one machine, those changes are automatically propagated to all the others. So if the user were to add a bookmark on her machine at the university, it would be reflected on her desktops at home. Even if not all of the computers are connected at the same time (such as if her laptop were powered off), then the next time the disconnected machine regained connectivity, it would automatically learn about the change and update itself.
Sourceforge project page.
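I haven't dug into the actual Tsync docs yet, so this is purely a guess at what that config file might look like, going by the /etc/exports comparison (made-up syntax, just to give the flavor):

# Hypothetical config, not real Tsync syntax: one synchronized directory per line,
# followed by the other known hosts in the group.
/home/alice/bookmarks    desktop.home.example.net  laptop.example.net
/home/alice/papers       desktop.home.example.net

Then tsyncd runs on each machine and propagates changes whenever the hosts can reach each other.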
- jim 12-03-2005 3:56 am [link] [add a comment]

Michael Robertson, whose last venture, MP3.com, was sued out of existence, has just launched a new music-oriented service called Oboe at MP3tunes.com. I don't have any opinion on how well he will execute, but this is an interesting idea. It's pretty close to a number of things I am trying to do (on a much smaller scale) as well.

Here's a Boing Boing post with a long note from Robertson explaining the deal:

You can store all of your own music, making your entire music collection playable from any browser in the world. Plus you can also sync that entire music collection and playlists to multiple computers with a single mouse click. Oboe is the jukebox in the sky that can store all library for safety, playback and move your music to any location for offline playback as well.
$39.99 per year. Supposedly unlimited uploads, and 128 Kb/s streaming playback. If he can really pull that off it sounds like a great price to me. I am so curious what someone at that level pays per Mb/sec for bandwidth. Well, okay, I'm not so much curious as jealous.
- jim 12-01-2005 7:00 pm [link] [add a comment]

BitTorrent creator Bram Cohen today announced a deal with the MPAA under which DMCA takedown procedures for infringing content will be "expedited." But don't get too worked up when you read about this. At worst it is meaningless. There is just no technical way for them to stop it - this deal is sort of like securing airlines by prohibiting toenail clippers. It might make someone somewhere feel a little safer, but it's not actually doing anything to address the underlying issue. I suspect this is another example of the industries (be they Recording or Motion Picture) failing to understand the technology. And perhaps Bram has even pulled off a small coup here, as trading nothing for a little legal cover sounds like a pretty good deal to make.
- jim 11-23-2005 5:34 pm [link] [add a comment]

Tom helped me move the server - ash.datamantic.com - to the data center. I took the old server out, put in the new one, fired it up, and everything seemed to be okay. I had put in the new network configuration back at headquarters. But then I tried to SSH into tulip (the current server, located in a different data center), and it wouldn't recognize my password. Weird. ifconfig seemed to indicate that ethernet was up and connected. And getting a response from tulip (even a rejection) further made me believe that things were working. But I couldn't account for the inability to log in.

But luckily (I am learning) I didn't waste tons of time trying to figure out what was wrong. I say luckily because, as I suspected, everything really was fine. I'm back home now and I can SSH into ash with no problem. So everything seems to be working. I'll still have to figure out the weirdness about not being able to get a secure shell going out from there, but it's not a big deal at the moment.
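For next time, a few quick checks from the data center probably would have narrowed it down faster. Nothing exotic, just the usual suspects (the hostnames and username below are stand-ins):

ssh -v jim@tulip              # verbose output shows whether it's a network, key, or auth failure
ping -c 3 tulip               # confirm basic reachability from the new network
host tulip                    # make sure the name resolves to the address I expect
tail /var/log/secure          # on the destination box, the auth log says why a login was rejected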

So I'm a little behind schedule, but at least it has finally happened.
- jim 11-23-2005 12:08 am [link] [3 comments]

One of the main things I have my eye on is making it much easier for users to upload files to the server. Making a post or a comment is easy, but getting even a small binary file, like an image, onto the server takes a few more steps, and getting a large one (say, over a meg) up there is ridiculous - you need to leave the browser entirely and go through a bunch of convoluted steps.

I am looking at the uploading issue from a lot of directions. I think that email is going to be an important upload channel, especially from mobile devices. And I'd like to see some integration with my desktop tools, which for me would mean iTunes and iPhoto. I want to have a playlist in iTunes and an album in iPhoto, and any song or image I put in them is automatically put onto the server. I actually have this working in iPhoto, except you have to click on a program in the dock when you want it to upload - but then it will put everything in that album onto the server and erase the images from the album.
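The upload step itself is not the hard part. Something along these lines is all that dock program really needs to do; the URL and folder here are hypothetical placeholders, not my actual setup:

# Sketch: push every file in an export folder to the server, then clear the folder.
UPLOAD_URL="http://server.example.com/upload"      # hypothetical upload endpoint
EXPORT_DIR="$HOME/Pictures/ToServer"               # hypothetical drop folder
for f in "$EXPORT_DIR"/*; do
    [ -f "$f" ] || continue
    curl -s -F "file=@$f" "$UPLOAD_URL" && rm "$f"   # post as multipart form data, delete on success
done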

But we really need to be able to upload larger files through the web upload interface. This works fine for things under a meg, but above that it craps out, because of the way Apache handles these file uploads. Lighttpd is another, newer, open source web server that aims to fix this problem, along with a few other issues that will be important to us as well. Seems like this is the web server to use (at least it will be once they get all the kinks worked out) if your site deals with a lot of large binary file uploads and downloads. Here's the Lighttpd homepage. And here's the explanation about large file uploads.

I'm still going to mainly run Apache, but I hope to have Lighttpd running as well for certain virtual domains that deal with its sort of thing.
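For the virtual host part, the lighttpd config would presumably look something like this - directive names from memory, so double-check against the docs, and the hostname, paths, and config location are placeholders:

# Sketch: a lighttpd virtual host that serves one domain and spools large uploads
# to disk instead of buffering them in memory.
cat >> /etc/lighttpd/lighttpd.conf <<'EOF'
$HTTP["host"] == "files.example.com" {
    server.document-root = "/var/www/files"
    server.upload-dirs   = ( "/var/tmp/lighttpd-uploads" )
}
EOF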
- jim 11-20-2005 1:25 am [link] [add a comment]

Okay. Now I've:

Installed: kernel-smp.x86_64 0:2.6.9-22.0.1.106.unsupported
Complete!


That should give me NTFS support, although I have to admit to not liking the word 'unsupported' being anywhere near the word 'kernel'. Probably I'm just a too-easily-freaked-out novice though. Here goes...
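Assuming the new kernel boots cleanly, mounting the drive should just be something like this (the device name is a placeholder for wherever the disk actually shows up):

mkdir -p /mnt/ntfs
mount -t ntfs -o ro /dev/sda1 /mnt/ntfs    # read-only is the safe mode for the in-kernel NTFS driver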
- jim 11-19-2005 5:41 am [link] [1 comment]

You learn something new every day. It's not always what you want to learn, though.

So I have the NTFS drive mounted on the Mac. And my idea was that if I could just get the Mac and the Linux box to connect directly to each other (over ethernet, not over 802.11b to the router), everything would go much faster. After all, there are gigabit ports on both ends.

I thought maybe I couldn't get it to work this way before because I needed a crossover ethernet cable. I'm not sure that is true, since the Mac ports are supposed to be auto-sensing, but maybe that's only when connected to other Macs. Anyway, I bought the crossover cable and got it to work.

Hallelujah I thought.

Except the copy still only went at 300 KB/sec. WTF? I guess the network connection was not the limiting factor. I wonder what is? It must be the external drive that is the bottleneck. I'm surprised it can't do better than that though. This is a 2-drive RAID-0 connected over 400 Mb/sec FireWire to the Mac (and then over gigabit ethernet to the server).
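If I wanted to pin down the bottleneck, the obvious move would be to time the disk and the network separately, roughly like this (paths and hostnames are stand-ins, and this is the Mac's BSD dd, hence the lowercase 'm'):

dd if=/Volumes/External/somebigfile of=/dev/null bs=1m          # raw read speed of the FireWire RAID, no network involved
dd if=/dev/zero bs=1m count=500 | ssh jim@ash "cat > /dev/null"  # raw network throughput, no slow source disk involved

If both of those turn out to be fast, the problem is in whatever protocol is doing the copy rather than in the hardware.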

In any case, I just went back to doing it wirelessly because that way I can still be on the internet with the Mac while it is copying.

Not sure how this is going to work though, since at this rate it will take a *long* time to make the copy - at 300 KB/sec, 370 GB works out to roughly two weeks. While I figure out what to do I'm just going to let it run (or crawl) in the background. Almost to 1 GB! Only 369 more to go. :-)
- jim 11-18-2005 11:57 pm [link] [3 comments]

Well, HFS+ is supported in the CentOS 2.6 kernel, which was surprising to me, but NTFS (Windows partitions) is not! Crazy. What a pain in the butt. My system can see the drive, but it can't mount it unless I make unsupported changes to my kernel. Yuck. Maybe I'll just mount it on my Mac and copy it over the network, but that is going to be even slower. Damn.
- jim 11-18-2005 9:25 pm [link] [add a comment]

Seems like Sun is really kicking butt lately. A year ago I wouldn't have believed it, but that just shows again how little I know. First they released a sub-$1000 SunFire X2100 1U server that looks really sweet. And now plans have been revealed for a massive storage monster called Thumper:

The 4U high system will hold two dual-core Opterons and support up to 16GB of memory. A more unique part of the server will be Sun's use of 48 SATA drives.
Holy cow. And the key to utilizing all that storage is a new filesystem, ZFS, that will be included in Solaris 10. ZFS sounds *really* amazing. The sort of thing that might make someone consider some really expensive Sun gear. Only now their gear isn't expensive any more. Lookin' good Sun.
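Part of what makes ZFS sound so appealing for a box like that is how little ceremony it takes to pool a pile of drives. A rough sketch of the idea (the Solaris-style device names are placeholders):

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0    # one parity-protected pool across five drives
zfs create tank/projects                                       # filesystems are carved out of the pool on demand
zfs set compression=on tank/projects                           # per-filesystem properties, no reformatting
zpool status tank                                              # health of every drive in the pool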
- jim 11-17-2005 8:00 pm [link] [10 comments]
