Talk:RAID/Archive 3
This is an archive of past discussions about RAID. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3 | Archive 4 | Archive 5 | → | Archive 7
RAID1 read speed is more like a single disk read speed
Let's assume that the data is something like ABCDEF... and the reads are split across two disks. If the stripe size is less than one disk cylinder, the first disk reads A while the second one reads B. After that, the first one should read C and the second one D, but the first disk has to wait while stripe B passes under its read head (the data is ABC... and not ACB..., remember) before the head reaches stripe C. The same thing happens on the second disk - one of them will always read the even stripes and the other the odd ones, but neither is able to 'skip' the data that the other disk has already read. If the stripe size is bigger than one disk cylinder, then after reading A and B in parallel the first disk has to issue a seek to 'jump over' the next cylinder(s) where stripe B is stored to reach the cylinder where C begins. So either way we have non-sequential reads with a lot of seek and rotational latency penalty. Here's one example: two WD Raptors (sda and sdb) form the md0 software RAID1:
 manchester ~ # hdparm -t /dev/md0
 /dev/md0: Timing buffered disk reads: 178 MB in 3.00 seconds = 59.26 MB/sec
 manchester ~ # hdparm -t /dev/sda
 /dev/sda: Timing buffered disk reads: 176 MB in 3.03 seconds = 58.09 MB/sec
 manchester ~ # hdparm -t /dev/sdb
 /dev/sdb: Timing buffered disk reads: 184 MB in 3.01 seconds = 59.14 MB/sec
As you can see, the streaming read speed of the RAID1 array is the same as the streaming speed of a single disk.
- I looked at the Linux RAID 1 code and it looks like it's optimized for random access and for one sequential read per disk. The read-balancing algorithm basically first tries to select the disk whose next sequential read sector is the requested sector. If none is found, the disk whose head is closest is chosen. So for a single sequential read, one disk will most likely do all the work. Too bad hdparm doesn't take a starting sector number, as you'd then be able to start two copies at different spots in the array and each would give you 60 MB/sec.
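For illustration, the read-balancing heuristic described above can be sketched in a few lines of Python. This is a simplified model of the idea, not the actual Linux md driver code; the Mirror record and its fields are made up for the example:

 from collections import namedtuple

 # Hypothetical stand-in for a mirror's state; not the real md data structure.
 Mirror = namedtuple("Mirror", "name head_position next_sequential_sector")

 def choose_mirror(disks, requested_sector):
     # Prefer a mirror that would continue a sequential read, i.e. whose
     # next expected sector is exactly the one being requested.
     for d in disks:
         if d.next_sequential_sector == requested_sector:
             return d
     # Otherwise pick the mirror whose head is closest to the request.
     return min(disks, key=lambda d: abs(d.head_position - requested_sector))

 disks = [Mirror("sda", head_position=1000, next_sequential_sector=1008),
          Mirror("sdb", head_position=500000, next_sequential_sector=500008)]
 print(choose_mirror(disks, 1008).name)    # "sda": continues the sequential stream
 print(choose_mirror(disks, 499000).name)  # "sdb": closest head wins

With a policy like this, a single sequential stream keeps being routed to the same mirror, which is consistent with the hdparm numbers above.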
- The RAID article discusses theoretical maximums. Each RAID implementation (Linux, Windows, hardware) will be different. Some will handle sequential access better and some will handle random access better. How close an implementation comes to the theoretical maximum shows how good it is. As for "one of them will always read even and the other one odd stripes", that's just one example implementation (although you should call them blocks in RAID 1). Nothing stops the software/firmware from starting the other disk on another head/cylinder as you suggest.
- As an example: the Linux RAID 1 driver will handle two sequential reads at full speed. The nForce RAID controller in my Windows box doesn't, and seeks like crazy.
- If you wanted to optimise the total time to read in a large file, you would split the file in half and read one half from each drive. The trouble is that the OS doesn't know in advance how much data the app will ask for, so it can't really do that; reading in big blocks would approximate that goal fairly well, though.
- From your description I suspect the Linux software RAID was optimised for server use. Servers generally do a lot of random access, as different clients are requesting different things at the same time, so it makes more sense to give different work to the drives than to try to split the same job between them. Plugwash 18:50, 16 April 2007 (UTC)
"There are even some single-disk implementations of the RAID concept."
This claim is made in the last paragraph of the introduction, then never mentioned again as far as I can see. Is this true? How can you have a single-disk implementation of the RAID concept? Can someone more knowledgeable than me elaborate on this, or remove it? --Stormie 00:02, 14 July 2006 (UTC)
Yes, it is possible to run RAID on a single drive. I have no idea, though, why anyone would want to do it. The point of RAID is to offer redundancy and/or a performance increase. While it is possible to run RAID on one drive, it is not recommended because of decreased storage space, no redundancy, decreased performance, and no fault tolerance. As with many things, just because it can be done does not mean it should be. Because it is very infrequently done, pointless, and inconvenient, not to mention that it would likely require its own section, I hesitate to expand on the topic. However, because the information is correct and slightly informative, I do not believe it should be removed. Freedomlinux 20:38, 25 August 2006 (UTC)
I have expanded on it a while back (yeah, that's me, the IP'd guest... anyway) but now that I look at it, it really is in a random spot. The section talks about the history of RAID, and then randomly it pops up with, "There are single-disk versions! Just so you know," and it makes even less sense since it goes from talking about the history to, "There are EVEN single-disk..." (Hey, I just expanded on it, I didn't read around it...). I propose moving it somewhere else, but not giving it its own section unless we absolutely have to... does anyone have any suggestions on where to move it? -- DEMONIIIK 06:32, 16 February 2007 (UTC)
Wrong image with RAID 1.5
The article writes "RAID 1.5 is a proprietary RAID by HighPoint and is sometimes incorrectly called RAID 15." However, the image associated with this claims to be a diagram of RAID 1.5, while what it actually illustrates is RAID 15.
I have never edited any wikis before, so I figured I'd best put this up for discussion instead.
Probable mistake in RAID 10 section
There's an error, or at least an unclear section, in the Linux RAID 10 section which reads:
"This md driver should not be confused with the dm driver, which is for IDE/ATA chipset based software raid (ie. fakeraid)."
There are a lot of RAID drivers in Linux, and three subsystems: MD, LVM (now LVM2), and DM. The MD driver supports various RAID levels and is entirely software oriented - it has nothing to do with IDE specifically. The DM driver (device mapper) is a subsystem which supports various RAID modes and again has nothing to do with IDE specifically.
128.123.64.215 02:46, 25 July 2006 (UTC)
Vinum volume manager
I've created an article about the Vinum volume manager (software RAID). See: Vinum_volume_manager
I've added a link to See Also
Also, this site: [Logical Volume Manager Performance Measurement] might prove useful for the hardware vs. software RAID section. I added it to the external links section.
I leave it up to other people to properly integrate this into the article, since I only have experience with software RAID and have never used hardware RAID...—Preceding unsigned comment added by Carpetsmoker (talk • contribs) 00:26, July 29, 2006
Hardware RAID compared to Software RAID.
In my opinion this should be split into a new article, for obvious reasons...—Preceding unsigned comment added by Carpetsmoker (talk • contribs) 01:56, July 29, 2006
I disagree. It belongs with the body of the RAID article, or it will get overlooked. - Chris. Jan 1, 07
- On the same topic, I've made an attempt at cleaning up the hardware vs. software RAID section - there were a lot of meta-data comments in the source, and the prose was even worse than mine! I've tried to wikify it as best I can, but am still not 100 percent happy with it.
- I'm thinking of a general discussion of the three types followed by pros and cons for each, i.e.:
- Hardware RAID
- Pros
- Cons
- Software RAID
- Pros
- Cons
- Hybrid RAID
- Pros
- Cons
- Any thoughts? Baz whyte 20:04, 26 February 2007 (UTC)
RAID0 seek times
I removed this sentence:
"If the accessed sectors are spread evenly among the disks then the apparent seek time would be reduced by half for two disks, by two-thirds for three disks, etc., assuming identical disks. For normal data access patterns the apparent seek time of the array would be between these two extremes"
Because it's complete nonsense. Let's take two disks as an example... the I/O can start as soon as the first part of the data is found.
Case 1: If this first part is on the disk with the faster seek time, the I/O can start as soon as this disk has accessed the sector. But if the slower disk hasn't found its first block of the data by the time the faster disk has reached the end of its first block, the faster disk has to wait for the slower one. For a block size of 128k and a transfer rate of 50 MB/s, a disk needs only 2.5 ms to read a block.
Case 2: If this first sector is on the disk with the longer seek time, the faster disk has to wait.
For Case 1, the seek time can be below that of a single disk, but the maximum reduction is limited by the block size and the transfer rate. For Case 2, the seek time will be higher, bounded only by the maximum seek time of the slower drive.
When using synchronized spindles, both disks have to perform the same operation, so the seek time will be near that of a single drive.
Even if the two disks could read/write completely independently, there still is no reduction by one half for two drives or two-thirds for three drives. Anyone with some knowledge of probability theory will know that the reduction of seek times in this case depends on the distribution of the seek times and the distribution of the relative head positions.
Just an easy example: Two identical drives, seek times are 2ms, 5ms or 8ms, all with a probability of 33.33%.
→ Average seek time for a single drive: 5 ms
Probabilities for two drives:
First 2 ms, second 2 ms → p = 1/9, resulting seek time: 2 ms
First 2 ms, second 5 ms → p = 1/9, resulting seek time: 2 ms
First 2 ms, second 8 ms → p = 1/9, resulting seek time: 2 ms
First 5 ms, second 2 ms → p = 1/9, resulting seek time: 2 ms
First 5 ms, second 5 ms → p = 1/9, resulting seek time: 5 ms
First 5 ms, second 8 ms → p = 1/9, resulting seek time: 5 ms
First 8 ms, second 2 ms → p = 1/9, resulting seek time: 2 ms
First 8 ms, second 5 ms → p = 1/9, resulting seek time: 5 ms
First 8 ms, second 8 ms → p = 1/9, resulting seek time: 8 ms
Combined:
2 ms, p = 5/9
5 ms, p = 3/9
8 ms, p = 1/9
Average seek time = 5/9 * 2 ms + 3/9 * 5 ms + 1/9 * 8 ms = 3.67 ms
Or another example: If all seek times are in a range between 3ms and 20ms with an average of 11.5ms, how could they go down to 2.3ms for 5 disks?
--JogyB 10:39, 31 July 2006 (UTC)
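A quick way to check the arithmetic in the example above (and to see what changes if, as argued further down, the array has to wait for the slower member rather than the faster one) is to enumerate the nine outcomes directly. A minimal Python sketch:

 from itertools import product

 # Two identical drives, seek time 2, 5 or 8 ms, each with probability 1/3.
 seeks = [2, 5, 8]
 pairs = list(product(seeks, repeat=2))   # the nine equally likely outcomes

 e_min = sum(min(a, b) for a, b in pairs) / len(pairs)  # I/O starts with the faster drive
 e_max = sum(max(a, b) for a, b in pairs) / len(pairs)  # I/O waits for the slower drive
 print(e_min, e_max)  # about 3.67 ms and 6.33 ms, versus 5 ms for a single drive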
- Isn't this true for RAID1 as well?
195.159.43.66 14:55, 25 September 2006 (UTC)
This affects all RAID levels, as soon as data is read from or written to two or more disks simultaneously - at least AFAIK. --JogyB 07:48, 26 September 2006 (UTC)
Correction: The situation is different for RAID1, as both disks contain all of the data... in this case they are able to read completely independently. --JogyB 21:28, 30 October 2006 (UTC)
- It's not completely nonsense or even partially nonsense. Might not be explained the best way but not nonsense.
- You've made the bad assumption that the disk accesses must be synchronized. One disk does NOT have to wait for the other to finish what it's doing. If I issue a read for a block on one disk and it takes forever (bad disk, whatever) and then issue reads for 10 blocks on disk 2 (you'd have to know the block size to do this right), the 10 blocks will be read regardless of what's going on on the first disk.
- Second, synchronised spindles means exactly what it says. The spindles are synchronised, not the heads. This is most useful in RAID 1 during a write operation where all disks must be written at once.
- Your probability table is for a brain-dead controller in RAID 1 mode which doesn't know which head is closer to the requested block. Your table is saying that both drives would seek and the first drive reaching the block would stop the other drive's seek, which is also incorrect. The other drive would still have to complete its seek (it can't abort the read command), so each result would be the maximum number, not the minimum. Your average would be closer to 8ms, which would be slower than a single drive, which is why I call it a brain-dead controller. The table could also be for one single sector access in RAID 1. But you can't measure average seek with one sector.
- In RAID 0 each disk has a different set of data. When you ask for a block (or just one sector in the block, doesn't matter) only one disk in the array is physically capable of retrieving that data. The other disk doesn't have that block so even if it seeked to the same cylinder as the other drive and made it there faster, it would accomplish squat.
- In my example you'd be retrieving one block from each disk in the RAID 0 array. For a two-drive array that means two blocks. A single drive (no RAID) would also have to grab two blocks to do the same work. Each disk in the RAID array would seek to its block in 5ms (average) at the same time, for a total of 5ms. The single disk would need to do two 5ms seeks one after the other, for a total of 10ms. 5ms/10ms = 1/2 = half.
- Take your last example with an 11.5ms average seek. A seek benchmark program, let's say, averages the seeks of 1000 random sectors for a single disk and gets 11.5s. 11.5s / 1000 = 11.5ms. Now what happens on your 5-disk RAID 0 array? Each disk does 200 seeks in 2.3s, and since they're all doing this simultaneously (and independently) it takes 2.3s total. Now the benchmark calculates 2.3s / 1000 = 2.3ms. That's why I used the word apparent. It looks like the array is performing as a single 2.3ms drive. On RAID 0 the 1000 sectors would have to be distributed evenly across the discs (200 each) to get exactly 2.3ms. If all the sectors happened to fall on one disc then you'd get 11.5ms. Normal access would be somewhere in between. For RAID 1 it's not a problem since each disk has the same data, so each drive would take exactly 200 sectors.
- I'm reverting that edit due to your bad assumptions and because my original text is still correct.
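The "apparent seek time" argument above can be illustrated with a toy model: each seek takes 3-20 ms (11.5 ms on average), the members work independently, each request can only be served by the member that holds the block, and the benchmark only sees the total elapsed time. The function and variable names are made up for the sketch:

 import random

 # Toy model: n_disks independent members; seeks on different members overlap.
 def apparent_seek_ms(n_disks, n_requests, pick_disk):
     busy = [0.0] * n_disks                      # seek time accumulated per member
     for _ in range(n_requests):
         busy[pick_disk(n_disks)] += random.uniform(3.0, 20.0)  # one 3-20 ms seek
     return max(busy) / n_requests               # elapsed wall-clock time per request

 even = lambda n: random.randrange(n)            # blocks spread evenly (best case)
 skewed = lambda n: 0                            # every block on one member (worst case)

 print(apparent_seek_ms(5, 1000, even))          # roughly 11.5 / 5 = 2.3 ms
 print(apparent_seek_ms(5, 1000, skewed))        # roughly 11.5 ms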
- You're talking about bad assumptions?
- Your assumption means that a disk never reads linearly and each block has to be sought separately, or at least that the degree of fragmentation was n times lower (n = number of disks) for the RAID set... that's nonsense. In reality, the degree of disk fragmentation will (on average) be exactly the same for a single disk and a RAID set, as the RAID set appears as a single disk to the operating system. So if the single disk has to perform a seek, the RAID set also has to, and then we're back at my assumptions. I looked at a lot of benchmark results and tested several systems myself - I never found the effect you're describing.
- When you're benchmarking seek times you DON'T want to read linearly. You want to seek as much as possible with as little data transfer as possible. Usually that means issuing a verify command for a sector so no data will be transferred, but that may not always be possible, so you have to read at least one sector and subtract out the time it takes for the read + transfer.
- On this point I disagree. You want to perform as many seeks as possible, but it should still have something to do with "real" applications. So a seek test should still be performed with a little data transfer.
- JogyB 10:47, 13 November 2006 (UTC)
- Depends on the benchmark. There's the synthetic kind that measures just one attribute like I describe and there's the "real world" benchmark that you describe. Depends on what information people want. A bunch of synthetic benchmarks measuring different things (seek times, throughput, etc) lets people make an educated decision if a real world benchmark doesn't exist that simulates the access pattern that their particular application does.
- Avernar 05:01, 14 November 2006 (UTC)
- The only assumption I made was thinking that you were talking about low-level block I/O. You're talking about file I/O. File fragmentation and how multiple applications access the files can't be predicted. It's the file system that sees the array as a single disk. For software RAID the OS's RAID driver sees the disks as independent; for a RAID add-in card the card's driver knows about the discs. Only for an external RAID enclosure would the OS not have a clue about the drives. The Linux software RAID 1 driver knows exactly where the heads are. For RAID 0 the driver or hardware controller doesn't have a choice. It's the access pattern from the file system and block cache levels that determines which disk to go to. Do the requests favor one disk over the other or are things pretty even?
- "So if the single disk has to perform a seek, the RAID set also has to and then we're back at my assumptions." Now here's why I said you made a bad assumption (no offense meant, BTW). Both discs in the RAID 0 set DO NOT seek for a single read request, only the one that has the data. This is a fact. I'm not wrong about this. If a RAID controller can't do this it's badly designed as it would have to issue a read for the same LBA on both disks even though on one disc it would be the wrong data in RAID 0. Note: There is no Seek command, it's done as part of the Read command. Forgive me for beating this point to death but if you disagree on this point the rest of what I say is moot. So let me know and will discuss this point first.
- There's no discussion needed... I think our problem really was that we were talking about different levels. But the seek still has to be performed on both disks when the file is larger than the block size of the RAID. So you have two seeks where the single drive only has one, and this leads to the same apparent seek time for a single disk and RAID0 (when the disks are synchronized; it's impossible to tell what happens when they are not). And this is exactly the situation you're referring to in the article.
- JogyB 10:47, 13 November 2006 (UTC)
- No, I was talking about RAID 0 but with a specific access pattern. I do agree with you that we do need the long transfer situation in there as well that you describe above (sequential file reads). But we should also keep what I wrote about the short transfers (random file reads). I'll add that to the article in the next few days or put it in here first so we can look it over before putting it in the main article.
- Disc synchronization is not a problem as the buffer cache will take care of things for regular file reads. In Windows, for example, you can give a flag on the open to optimize for sequential reads or random reads. So for a sequential file read the data from one disk will go into the buffer and the disk can go off to service other applications' read requests. When the other disk gets around to reading its data, the file operation for that application will finally complete. If you're using IO Completion Ports then the buffers you supply can fill out of order, so it's no problem there. And for an application load the OS uses memory-mapped files, so the blocks can complete out of order as well. With more than one active application using the discs, a sequential read at the application layer may not cause a sequential read at the block layer. That's why I like describing what happens at the block level, as only the reader (of the article) knows what access pattern his applications/system is likely to produce. Like I mentioned elsewhere, it's probably a good idea to make a section that gives examples of what block/disc access patterns different applications and different machines (desktop/video edit/server/database) generate.
- Now if you do two reads in a row in parallel on random parts of the disk (remember, block level not file level) then two things can happen: 1) both blocks are on the same disk and the array has to do them serially like a single non-RAID disk, or 2) each block is on its own disk so the array can do them in parallel at the exact same time.
- Since we can't predict what's happening at the application and file system layer, I presented two possible and diametrically opposed extremes at the block level. The first extreme is that the blocks requested are all odd blocks or all even blocks (worst case). One disc gets all the requests and therefore the array performs like a single non-RAID disk. The second extreme (best case) is that the blocks are requested in a perfect alternating odd/even pattern. Both disks are seeking and reading in parallel and you get half the apparent seek time compared to a single non-RAID disk (you'd also get twice the throughput too).
- This is only true if single blocks are read randomly. As soon as there is linear reading (of at least as many blocks as there are disks in the RAID) this is not true anymore. Let me give an example:
- Single Disk: A0 A1 B0 B1 C0 C1 D0 D1 E0 E1 F0 F1 G0 G1 H0 H1...
- RAID Disk 1: A0 B0 C0 D0 E0 F0 G0 H0...
- RAID Disk 2: A1 B1 C1 D1 E1 F1 G1 H1...
- Now if you request A0 C1 E0 F1 then you are absolutely right.
- But if you request A0 A1 B0 B1 and F0 F1 G0 G1 then the single disk has to perform two seeks and the RAID has to perform four seeks distributed on two disks - exactly the same situation for RAID and Non-RAID.
- You're focussing a little bit too much on the block level... in most cases it will be a sequence of blocks rather than a single block that is read. (The block-to-disk mapping used in this example is sketched just below.)
- JogyB 09:56, 13 November 2006 (UTC)
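The block-to-disk mapping used in the example above is just the usual RAID 0 striping formula, which can be written out explicitly (chunk-sized logical blocks; the function name is made up for the example):

 # RAID 0 striping: which member holds a given chunk-sized logical block,
 # and where on that member it lives.
 def raid0_location(block, n_disks):
     return block % n_disks, block // n_disks    # (member, chunk offset on member)

 # Two members: A0 A1 B0 B1 ... -> member 0 holds A0 B0 C0 ..., member 1 holds A1 B1 C1 ...
 # A0 C1 E0 F1 hits the members alternately, so each does half the seeks, in parallel,
 # while A0 A1 B0 B1 plus F0 F1 G0 G1 makes each member seek once per group -
 # as many seeks per member as a single disk would need for the whole request.
 for block in range(8):
     print(block, raid0_location(block, 2))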
- The block (stripe size) is typically around 64K by default. So in your example the read of four blocks would be reading 256K worth of data. Throughput would be the bigger factor here rather than seek time. This access pattern is typical of application loads, image loads, audio and video presentation. As long as the filesystem is not too badly fragmented then seek times are not a big issue. Now for a database there will be a lot of small reads typically 4K in size all over the disk. Now here the F0 F1 G0 G1 pattern is more likely and only 6.25% of each block is read. The 4 x 4K = 16k worth of data will just fly from the platters and through the buses and now seek times are the bigger factor.
- Yes I'm focusing a lot on the block level as it's what's between the filesystem and the array. If you don't know what's going on at the block level you have an incomplete picture of what's going on. You even resorted to a block level example above to prove your point so you can see why it's important. Now it's also important to link what type of access at the file level generates what type of block request pattern at the block level. In your case you didn't consider a database access pattern. Could be an idea for a new section, what at the filesystem level causes what at the block level. I saw a question in the newsgroups where someone was asking what raid level should they use for a database.
- Avernar 05:01, 14 November 2006 (UTC)
- Now the problem is that if the parallelism is broken anywhere along the request chain from the application to the discs then you're always going to get the worst case. Just because your benchmarks didn't show it doesn't mean it's not possible, especially if they were written to test a single drive and are not multi-threaded. Here are all the things that have to happen to get that extreme:
- 1) Benchmark has to be multi-threaded (or use I/O completion ports on Windows), at least one thread for each disk in the array.
- 2) Bypass the file system and block cache and talk to the block layer directly.
- 3) Each thread issues a read (or verify if it can do it) for the data on one disc only. Split the threads evenly. Need to know the stripe size to do this properly.
- 4) Request only one block; we're measuring seek times here, not throughput. Spread the requests all over the disks randomly.
- 5) The RAID chip/controller/driver must not serialize the requests.
- 6) The low-level IDE/SCSI device drivers must not serialize the requests.
- 7) The bus must not serialize the requests. For SCSI, TCQ (Tagged Command Queuing) must be enabled. For IDE, each drive must be on its own controller channel.
- If all that happens then you WILL get half the apparent seek time of a single disk. Now if you change #3 and have each thread randomly pick a block then you'll get more realistic results instead of the extreme best case, but it shouldn't fall close to or at the worst case. If you get the performance of a single drive (worst case) then one of the steps other than #3 is causing a problem. Note: for RAID 1 the change to #3 would not change anything since each disc has the exact same data. (A rough sketch of such a test follows below.)
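As a rough illustration of such a test, here is a small Python sketch that issues random single-sector reads against the two member devices from one thread each and reports the apparent per-read time. The device paths and size are placeholders, it needs root, and it ignores the page cache and O_DIRECT details, so treat it as a sketch of the idea rather than a proper benchmark:

 import os, random, threading, time

 DEVICES = ["/dev/sda", "/dev/sdb"]   # placeholder: the two array members
 DEVICE_BYTES = 70 * 10**9            # placeholder: size of each member in bytes
 SECTOR = 512
 READS = 500                          # random single-sector reads per member

 def worker(path, elapsed, idx):
     fd = os.open(path, os.O_RDONLY)  # raw device access, needs root
     start = time.time()
     for _ in range(READS):
         offset = random.randrange(DEVICE_BYTES // SECTOR) * SECTOR
         os.pread(fd, SECTOR, offset)  # one sector only -> seek-dominated
     elapsed[idx] = time.time() - start
     os.close(fd)

 elapsed = [0.0] * len(DEVICES)
 threads = [threading.Thread(target=worker, args=(d, elapsed, i))
            for i, d in enumerate(DEVICES)]
 for t in threads: t.start()
 for t in threads: t.join()

 # Both members seek in parallel, so the wall-clock time is set by the slower
 # one; spread over all the reads it looks like a much faster single "disk".
 print("apparent ms per read: %.2f" %
       (max(elapsed) * 1000 / (READS * len(DEVICES))))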
- Again: This is absolutely correct for randomly reading single blocks. However, this will nearly never happen, so you're describing a best case with no reference to practical application. And I don't think that this is really interesting to people reading this article. It could be mentioned, but the way the text is written now, readers may think that you will always get half the seek time when using RAID 0. But - as already mentioned - that is not what benchmarks of RAID0 systems show. Sure, these are synthetic benchmarks, but not as far away from "real" applications as the one you are suggesting.
- JogyB 10:47, 13 November 2006 (UTC)
- "However, this will nearly never happen, so you're describing a best case with no reference to practical application" Nope, database. :) See the answer to your question below for a reason for it. But I see your point where readers might think they will always get that seek time. Let me know what you think after I clarify those sections.
- Avernar 05:04, 14 November 2006 (UTC)
- And to that: "One disk does NOT have to wait for the other to finish what it's doing. If I issue a read for a block on one disk and it takes forever (bad disk, whatever) and then issue reads for 10 blocks on disk 2 (you'd have to know the block size to do this right), the 10 blocks will be read regardless of what's going on on the first disk."
- Read once more what I've written... "as all disks need to access their part of the data before the I/O can be completed". Look at the last word: completed. If the second disk needs forever to read its block, the first drive can read 10 billion blocks, but the transfer will never complete. Even if the two disks worked completely independently, the slower of the two disks would still define the end of the transfer. Just think about starting an application... it won't work with every second block missing.
- Again, you're talking file I/O and I was talking block I/O. I agree that the completion for a read for a single file would not complete if one of the discs were taking forever. But 10 other files being read by 10 other applications might succeed if they lucked out and the data they wanted happened to be on the other disk or already in the block cache. But reading a single file is not usually a seek intensive operation. A database on a busy server on the other hand would put a lot of seek stress on the raid system.
- If the data is completely in the cache or on the other disk, then you are right. But if only a single block has to be read from the disk with the bad seek time, this will affect the whole transfer. Also see my example above.
- JogyB 09:56, 13 November 2006 (UTC)
- Not for a multi-threaded database application. See my reply above. :)
- Avernar 05:01, 14 November 2006 (UTC)
- I'm reverting your edit because your original text was incorrect and still is incorrect. Better think about your assumptions.
- Now you're being rude. You changed it the first time but that's OK since you didn't know I was still around. It would have been more polite to discuss a request for a change first since it's not something that's obviously right or wrong. So I changed it back with the implied hint "I don't agree with you, everyone else thought it was OK, let's discuss if you think I'm wrong and it needs to be changed.". I'm changing it back since at the moment you're the only one who thinks it's wrong.
- No, I'm not. Look at the "What RAID cannot do" section. And this is what I heard of several people using RAID and read in several articles. In fact, I never heard or read about the reduction of seek times for RAID0. ;)
- Maybe you can show me an article or website supporting your statement (already looked for one, but most websites are referring to Wikipedia).
- I'll leave your text in the article, let's discuss this first.
- JogyB 09:56, 13 November 2006 (UTC)
- I assume you're talking about point #3. Yes, for a desktop system you're not going to get much out of RAID 0 unless you're doing a lot of video editing; the doubled read AND write throughput does wonders there. The person who wrote that only focused on desktops and not other things like file or database servers. Like I've mentioned above, I've seen server admins reading this article to get information as well. And he's wrong about there being no seek performance improvement. Heck, even you agreed above that the database access pattern I keep describing will improve seeking. Not sure what he means by buffer performance...
- Avernar 05:01, 14 November 2006 (UTC)
- If you're still not convinced let's discuss it further. Convince me I'm wrong and I'll even correct it myself. If anyone else has questions or an opinion join right in. I want this article to be accurate as well. I'll keep checking the discussion page daily as it's not emailing me when someone adds to the discussion...
- Just one question: Is it clear to you that we're talking about RAID0 and not about RAID1? For RAID1, what you're writing is nearly correct (in other words, you're describing the best case), but the situation is different for RAID0, as each disk contains only half of the data.
- Yes I'm talking about RAID 0. And you're this close to understanding what I'm talking about. You say that I described the best case for RAID 1. Now here's the core of my argument: The best case for RAID 0 is the same as the best case for RAID 1 as long as one condition is met, that the requests for blocks are spread evenly so that half the requests are for data on disc A while the other half is for data on disc B.
- And here's the other part of my argument: The real world results are going to be between the best case and the worst case. That's what I said in the article and I don't think anyone could find fault with that one. Doesn't matter if 99% of people are closer to the worst case and 1% are closer to the best case. I'm still right.
- You're right concerning the best case, yes. But as a general statement (as it is written now) it is wrong... in such a case it's always better to talk about the average (if only a single value is to be provided). As an example: if incomes in the US range between $5,000 and $1,000,000 a year and only 1% are close to the maximum, is it correct to write that people in the US earn $1,000,000 a year? I don't think so.
- JogyB 10:47, 13 November 2006 (UTC)
- There's three reasons why I put those "models" of the best and worst cases in there. First it helps people quickly compare the different raid levels on a more academic kind of level without having to worry about too many details. Second is that you can use those models to figure out the performance characteristics of the hybrid raid levels (1+0, 10, 150, and ones that we don't know about) without having to benchmark them. Third it lets people figure out what performance they'd get if the average number does not apply to their situation. An average or expected number should also be provided but you do have to specify under what circumstances this occurs as different applications and different machine roles have different access patterns. I'd LOVE to see benchmark numbers for all the real world situations but it would be a lot of work. Hopefully we can add all those numbers some day.
- Avernar 05:01, 14 November 2006 (UTC)
- Now if you're getting the worst case then that's statistically highly improbable and I'd suspect there's something wrong with your system or the test you're doing.
- Your example says that the best case is statistically highly improbable ;)... and in my opinion this is true for RAID0 - or give me an example of when huge amounts of single-block reading will be done. However, all the benchmarks (my own, in forums, websites, computer magazines) I've seen so far show a slightly increased seek time for RAID0 (OK, nearly all; in a few the seek time was reduced by 0.1ms).
- JogyB 10:47, 13 November 2006 (UTC)
- Just one question: Do you really think that random access of single blocks is the main application when operating a RAID0 (or RAID1)? JogyB 23:08, 13 November 2006 (UTC)
- YES!!! For a database server. For any kind of performance on a database server the indexes need to be cached in RAM for the most frequently used tables. The cache is usually primed on startup. From then on most of the access (unless you're running some kind of report) is for single rows out of the database or for rows scattered across the table based on some search criteria. Think of the DMV or VISA as an example. Thousands of requests for individual records all over the disk. I believe that SQL Server uses a 4K or 8K page size, and I've heard of one that goes as low as 512 bytes but don't remember which one.
- RAID 0 for a database, maybe (if you need the space, can't afford a lot of disks, and the machine is part of several identical ones in a cluster). RAID 1 with more than just two discs, yes. Hybrid levels that use RAID 0 as a sub component, YES. This is why I think that this information is important.
- And I just thought of a desktop example. P2P applications especially BitTorrent do a LOT of random reading and writing of small blocks all over one or more files simultaneously. RAID 0 would be perfect as RAID 1 would slow down because of the writes.
- Avernar 05:21, 14 November 2006 (UTC)
- I'm writing this as one answer, as I think we have quite the same opinion now. The database also came to my mind when I went to bed tonight, but I was too tired to get up again ;) (my last post was at 00:08 local time). OK, so I think we agree that it depends on the access pattern... on a desktop system you might get about the same seek time as for a single disk, while a database (or applications with a similar access pattern) will profit from reduced seek times (halved for two disks in the best case). I'll leave it to you to add this to the article (and please also to RAID 1; you will get a reduced seek time in any case, but the reduction described now is again only the best case), as I think you're a native speaker and I'm not.
- JogyB 08:20, 14 November 2006 (UTC)