And you know, the funny thing is, there's a tie-in here. Back around 2000 I picked up Newshosting as a client and helped them make some aggressive technical moves toward a distributed spool architecture. This included moving away from direct-attach storage arrays, and eventually we built one of the very earliest 24-drive-in-4U servers, based not on SCSI but on cheap commodity SATA disks. And it was all network-based, so each front-end server could talk to several (or MANY) spool servers.

This poured gas on an already simmering retention war between several of the larger providers, some of whom (like UseNetServer) were heavily invested in extremely expensive FC SAN gear.

There are several interesting things here. First, it was actually the conversion from ATA to SATA that made inexpensive large-scale arrays possible and practical. Prior to that, ATA cabling and controllers had been extremely limiting. Second, the price points were so horribly off for high-quality SCSI or FC gear that you could buy three or four times the amount of storage with cheap SATA arrays. That storage wasn't protected by a hardware RAID controller, so we did it in software, storing redundant copies, kind of like ZFS.

Some of you will probably have noticed that I'm an advocate of using inexpensive SATA disks (not the nearline "enterprise" drives, but desktop- or NAS-class ones) where possible, and a lot of that is based on the observation that you can buy a LOT more redundancy for the same price using less expensive disks, which are usually the major cost component of a larger storage array. I firmly believe that you can get better reliability and more capacity at a lower cost that way, a belief borne out by watching thousands of disks deployed to store petabytes of data.

Newshosting had a lot of fun with this, because for a long while they maintained higher retention than they advertised (and higher than anyone else in the business), and any time the competition upped retention, Newshosting just turned up some configuration settings to go a few days better.

Right, we were experimenting with ATA first. Perhaps you missed the part where I said "the conversion from ATA to SATA made possible and practical." But even before that conversion, which is what allowed a true explosion in density, it was practical to build an 8-drive-in-2U chassis with one of 3Ware's Escalade cards.

Now, what you need to remember is that back in 2000 a 50GB SCSI drive like the ST150176LW was running around $830 wholesale, and while that Escalade card was around $500, Maxtor had released an 80GB drive for around $280. So a shelf of nine 50GB SCSI drives ran around $8000 for 450GB (including the enclosure but no server), while a server of eight 80GB IDE drives, complete with server AND controller, was only around $3800 - and also gave you 640GB of space. You could get two of those for less than the SCSI disk shelf alone.

The problem was that the cheaper IDE solution was a real bear when it came to replacing a drive, so I was really happy to see SATA arrive, and we were an early adopter of the AIC 4U 24-drive chassis, which let us pop in three 3Ware 9500S-12MIs with multilane cables and removable drive trays that could actually be hot-swapped. But if you really want an education on this, you have to ask me about the impact of RAID controllers on the costs, etc. - and that's a whole pile of knowledge I don't have the time to regurgitate today.
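To make the economics above concrete, here's a quick back-of-the-envelope comparison using only the circa-2000 figures quoted in this post; the script and its variable names are purely illustrative, not something we actually ran.

```python
# Back-of-the-envelope $/GB for the two circa-2000 options quoted above.
# All figures come from the post itself; nothing else is assumed.

scsi_shelf_cost = 8000        # 9 x 50GB SCSI drives plus enclosure, no server
scsi_shelf_capacity_gb = 450  # 9 * 50GB

ide_server_cost = 3800        # complete server, controller, 8 x 80GB IDE drives
ide_server_capacity_gb = 640  # 8 * 80GB

print(f"SCSI shelf: ${scsi_shelf_cost / scsi_shelf_capacity_gb:.2f}/GB")  # ~ $17.78/GB
print(f"IDE server: ${ide_server_cost / ide_server_capacity_gb:.2f}/GB")  # ~ $5.94/GB

# Roughly a 3x difference per gigabyte -- and the IDE number already includes
# an entire server, which is why two complete IDE boxes cost less than the
# bare SCSI disk shelf.
```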
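And for anyone wondering what "storing redundant copies, kind of like ZFS" across network-attached spool servers can look like in the abstract, here's a minimal sketch. To be clear, this is not the actual Newshosting design - the in-memory stand-ins for spool servers, the replication factor of two, and the store/fetch names are all assumptions for illustration only.

```python
# Minimal, purely illustrative sketch of software-level redundancy across
# several spool servers full of cheap disks. Real spools would be separate
# machines reached over the network; dicts stand in for them here.

import random

SPOOLS: dict[str, dict[str, bytes]] = {f"spool{i}": {} for i in range(1, 5)}
COPIES = 2  # keep each article on two independent servers

def store(article_id: str, data: bytes) -> list[str]:
    """Write the article to COPIES different spool servers; return where it went."""
    targets = random.sample(list(SPOOLS), COPIES)
    for host in targets:
        SPOOLS[host][article_id] = data
    return targets

def fetch(article_id: str, locations: list[str]) -> bytes:
    """Read from any surviving copy; losing one disk or server loses no data."""
    for host in locations:
        if article_id in SPOOLS.get(host, {}):
            return SPOOLS[host][article_id]
    raise KeyError(f"no surviving copy of {article_id}")

# A front end can talk to several (or many) spool servers:
where = store("<msg-id-1@example>", b"article body")
SPOOLS[where[0]].clear()                    # simulate losing an entire spool server
print(fetch("<msg-id-1@example>", where))   # the surviving copy still serves it
```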