Comments on: Backblaze Drive Stats for Q1 2023 https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/ Cloud Storage & Cloud Backup Sun, 25 Jun 2023 18:36:20 +0000 hourly 1 https://wordpress.org/?v=6.4.3 By: Meda https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-330015 Sun, 25 Jun 2023 18:36:20 +0000 https://www.backblaze.com/blog/?p=108597#comment-330015 What about the WD Black 2TB? Is it reliable enough?

By: Carlos https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329992 Mon, 22 May 2023 11:12:16 +0000 https://www.backblaze.com/blog/?p=108597#comment-329992 Are you also going to do a Q1 SSD report, or is that a yearly publication or something like that?

By: Robert https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329975 Sat, 13 May 2023 11:57:48 +0000 https://www.backblaze.com/blog/?p=108597#comment-329975 But a brief look at the difference between Seagate and WD at 16TB shows that reliability leans strongly toward WD.

By: Robert https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329974 Sat, 13 May 2023 11:55:28 +0000 https://www.backblaze.com/blog/?p=108597#comment-329974 It would be helpful if there were a chart showing each manufacturer's failure rate per drive capacity. It would group similar disks that happened to be produced, and possibly used, at the same time.

By: jdrch https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329968 Mon, 08 May 2023 22:29:27 +0000 https://www.backblaze.com/blog/?p=108597#comment-329968 How does BB define “failure”? For example, I’ve had a 12 TB Seagate Barracuda Pro in CrystalDiskInfo “Caution” status for nearly three years now, with no symptoms. None of my other large-capacity (10+ TB) HDDs (Seagate Exos, Toshiba MG, WD Gold, WD UltraStar) have shown so much as a hiccup during that time.

To me, this data isn’t so much indicative of HDD reliability as it is a sign that modern datacenter workloads are beginning to eat significantly into HDD life.

FWIW, assuming the HDDs used are datacenter drives, they’d be under warranty at the time of failure anyway.

By: Francisco Comparatore https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329960 Fri, 05 May 2023 10:30:25 +0000 https://www.backblaze.com/blog/?p=108597#comment-329960 Thank you very much. It was a very interesting read.

By: Kevin https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329956 Fri, 05 May 2023 04:46:54 +0000 https://www.backblaze.com/blog/?p=108597#comment-329956 In reply to Granny.

Agreed. I mean, the annual failure rates at a data center aren’t going to be that relevant to a home user, right? Not if drive failure has anything to do with usage.

So maybe rather than time, it would be more useful to score drive failure against MB or TB written and read.
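Kevin's suggestion can be sketched numerically. Backblaze's published AFR formula divides failures by drive-days; a usage-based alternative, as proposed above, would divide by data written instead. The numbers below are invented purely for illustration, and `failures_per_petabyte_written` is a hypothetical metric, not one Backblaze reports:

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Backblaze's published AFR: failures per drive-year, as a percent."""
    return failures / (drive_days / 365) * 100

def failures_per_petabyte_written(failures: int, tb_written: float) -> float:
    """Hypothetical usage-based metric: failures per petabyte written."""
    return failures / (tb_written / 1000)

# Made-up example: 60 failures across 1,000,000 drive-days...
print(f"AFR: {annualized_failure_rate(60, 1_000_000):.2f}%")  # AFR: 2.19%

# ...versus the same 60 failures normalized by 50,000 TB written.
print(f"Per PB written: {failures_per_petabyte_written(60, 50_000):.2f}")
```

The two metrics can rank the same drive model differently: a drive that sits mostly idle in a datacenter looks good by AFR, while a heavily written drive looks worse per unit time but may look fine per byte written.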

By: Granny https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329955 Thu, 04 May 2023 20:35:24 +0000 https://www.backblaze.com/blog/?p=108597#comment-329955 To give a better picture, you should add some details about the drives, along with power-on hours:
– workload (egress + ingress);
– temperature conditions (avg external temp, avg internal temp, min and max temps);
– room humidity;
– more details about failures: type of failure, hardware part that failed (head, controller, motor, platter, etc.).

By: John Haller https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/#comment-329954 Thu, 04 May 2023 15:55:15 +0000 https://www.backblaze.com/blog/?p=108597#comment-329954 In reply to Andy Klein.

Given the annual failure rates, but without looking at the dataset, it appears a good majority of the drives made it to retirement without failing. The failure rate and the cost of operating a drive (power, real estate, etc.) versus a bigger drive presumably both played into retirement decisions for particular drive types. I’ve got 10 drives in one system, with two clusters at 7 years old (6 TB) and 9 years old (4 TB). I’m guessing it’s about time for retirement, and I will be able to drop the number of drives to about 4. I just hope I don’t get correlated failures.
