There are quite a few brands to choose from when buying hard disks or SSDs, but which do you find the most reliable? Personally I've had great experiences with Seagate, but I heard Chris Titus had the opposite experience with them.

So I'm curious: which manufacturers do people here swear by, and why? Which ones have you had the worst experience with?

    • roofuskit@lemmy.world · 10 months ago

HGST does trend toward being a winner, and now so does Western Digital with its largest drives. But you definitely should pay attention to specific models, like you said.

  • Dudewitbow@lemmy.zip · 10 months ago

    Buying tech products by brand allows brands to sell you a shitty product at premium price points. Companies will always shit out bad eggs at times, and it's your job to know which product lines are bad and not let brand loyalty bypass that.

    At the bare minimum, if you are buying by brand, buy solely based on customer support, as some companies are significantly better at that than others, which is an objective trait.

  • Bonehead@kbin.social · 10 months ago

    I learned a long time ago that the manufacturer doesn't matter much in the long run. They all have a bad model occasionally. I have 500GB Seagate drives that still work, and some 1TB drives that died within a year. I've had good luck with recent WD Red 4TB drives, but my 2TB Green drives have all died on me. I had some Hitachi Deskstar drives that worked perfectly for years when no one would touch them because of a bad production run. I currently have a Toshiba 8TB that I had never heard of before, but it has seemed rock solid for the last year.

    Pick a size that you want, look at what’s available, and research the reasonably priced ones to see if anyone is complaining about them. Review sites can be useful, but raw complaints in user forums will give you a better idea of which ones to avoid.

    • rentar42@kbin.social · 10 months ago

      Can confirm the statistics: I recently consolidated about a dozen old hard disks of various ages. Quite a few of them had a couple of bad blocks, and 2 actually failed. One disk was especially noteworthy in that it was still fast, error-free and without complaints. That one was a Seagate ST3000DM001, a model so notoriously bad that it's got its own Wikipedia entry: https://en.wikipedia.org/wiki/ST3000DM001
      Other “better” HDDs were entirely unresponsive.

      Statistics only really matter if you have many, many samples. Most people (even enthusiasts with a homelab) won’t be buying hundreds of HDDs in their life.

  • BigMikeInAustin@lemmy.world · 10 months ago

    With spinning disks, I preferred Seagate over Western Digital, and then moved to HGST.

    Back in those days, Western Digital had the best warranty, and I used it on every Western Digital drive. But that was still several days without a drive, and I still needed a backup drive.

    So it was better to buy two drives at 1.3 x the price of one Western Digital. And then I realized that none of the Seagate or HGST drives failed on me.

    For SATA SSDs, I just get a 1TB to maximize the cache and wear leveling, and pick a brand where the name can be pronounced.

    For NVMe, for a work performance drive, I pick a 2TB drive with the best write cache and sustained write speed at second-tier pricing.

    For a general NVMe drive, I pick at least 1TB from anyone who has been around long enough to have reviews written about them.

      • BigMikeInAustin@lemmy.world · 10 months ago

        An analogy is writing everything on one piece of paper with a pencil. When you need to change or remove something, you cross it out instead of erasing, and write the new data to a clean part of the paper. When there are no more clean areas, you use the eraser to erase a crossed-off section.

        The larger the paper, the less frequently you come back to the same area with the eraser.

        Using an eraser on paper slowly degrades the paper until that section tears and never gets used again.
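        The crossed-out-paper analogy maps fairly directly onto code. Below is a minimal toy sketch of the idea (purely illustrative, not any vendor's actual firmware): writes go to clean blocks, old copies are "crossed out" as stale, and when clean space runs out, the least-worn stale block is erased and reused.

```python
# Toy sketch of log-structured writes with wear leveling, following the
# pencil-and-paper analogy: cross out old data, write to clean space, and
# only erase (the damaging operation) when clean space runs out.

class ToySSD:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))   # clean areas of the "paper"
        self.mapping = {}                     # logical address -> physical block
        self.stale = []                       # crossed-out sections
        self.erase_counts = [0] * num_blocks  # wear per physical block

    def write(self, logical):
        if logical in self.mapping:
            self.stale.append(self.mapping[logical])  # cross out the old copy
        if not self.free:
            # No clean space left: erase the least-worn crossed-out block.
            victim = min(self.stale, key=lambda b: self.erase_counts[b])
            self.stale.remove(victim)
            self.erase_counts[victim] += 1
            self.free.append(victim)
        self.mapping[logical] = self.free.pop(0)

ssd = ToySSD(num_blocks=8)
for _ in range(100):
    ssd.write(logical=0)  # hammer a single logical address

# Even though one address was rewritten 100 times, the erases are spread
# evenly across all physical blocks instead of wearing out one spot.
print(ssd.erase_counts)
```

        With a bigger "paper" (more blocks), the same number of writes produces proportionally fewer erases per block.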

      • BigMikeInAustin@lemmy.world · 10 months ago

        In general and simplifying, my understanding is:

        There is the area where data is written, and there is the File Allocation Table that keeps track of where files are placed.

        When part of a file needs to be overwritten (either because data was inserted or because there is new data), the data is actually written to a new area and the old data is left as is. The File Allocation Table is updated to point to the new area.

        Eventually, as the disk gets used, that "new area" comes back around to a space that was previously written to but is no longer in use, and that data gets physically overwritten.

        Each time a spot is physically overwritten, it very very slightly degrades.

        With a larger disk, it takes longer to come back to a spot that has already been written to.

        Oversimplifying: previously written data that is no longer part of a file is effectively lost, in the way that shredding a paper effectively loses whatever is written on it, and more securely than on a spinning disk.

        • teawrecks@sopuli.xyz · 10 months ago

          Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.

          Concepts like files, FATs, and copy-on-write are filesystem-specific. I believe that even if a filesystem were to deliberately write to the same location repeatedly to intentionally degrade an SSD, the firmware would intelligently shift its block mapping around under the hood so as to spread out the wear. If the SSD detects that a block is producing errors (bad parity bits), it will mark it as bad and map in a new block. To the filesystem, there's still perfectly good storage at that address, albeit with a potential one-off read error.

          Larger SSDs just give the firmware more spare blocks to pull from.

          • skittlebrau@lemmy.world · 10 months ago

            Does that mean that manually overprovisioning SSDs isn't necessary for maximising endurance? E.g. partitioning a 1TB SSD as 500GB.

            • BigMikeInAustin@lemmy.world · 10 months ago

              That would be called under-provisioning.

              I haven’t read anything about how an SSD deals with partitions, so I don’t know for sure.

              Since the controller intercepts the calls for specific locations, I’m inclined to believe that the controller does not care about the concept of partitions and does not segregate any chips, thus it would spread all writes across all of the chips.

            • teawrecks@sopuli.xyz · 10 months ago

              As the other person said, I don't think the SSD knows about partitions or makes any assumptions based on partitioning; it just knows whether you've written data to a certain location, and it could be smart enough to know how often you write to that location. So if you keep writing to a single logical location, it could decide to remap it to different physical memory so that you don't wear it out.

              I say “could” because it really depends on the vendor. This is where one brand could be smart and spend the time writing smart software to extend the life of their drive, while another could cheap out and skip straight to selling you a drive that will die sooner.

              It's also worth noting that drives have an unreported pool of "spare sectors" they can use if they detect one has gone bad. I don't know if you can see the total remaining spare sectors, but the pool typically scales with the size of the drive. You can at least see how many bad sectors have been reallocated using S.M.A.R.T.
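              As a rough illustration, here is one way to pull that reallocated-sector count out of `smartctl -A` text. The sample output below is made up for the example (real output has more attributes and varies by drive), though `Reallocated_Sector_Ct` and `Current_Pending_Sector` are real S.M.A.R.T. attribute names.

```python
# Parse the raw value of a S.M.A.R.T. attribute from `smartctl -A` style text.
# The sample text is illustrative; on a real system you would feed the parser
# the stdout of `smartctl -A /dev/sda` (run as root) instead.

sample_output = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
"""

def raw_value(smart_text, attribute):
    """Return the raw value of a SMART attribute, or None if absent."""
    for line in smart_text.splitlines():
        fields = line.split()
        # Attribute rows have >= 10 columns; the name is the second column
        # and the raw value is the last documented column.
        if len(fields) >= 10 and fields[1] == attribute:
            return int(fields[9])
    return None

print(raw_value(sample_output, "Reallocated_Sector_Ct"))  # 0 in this healthy sample
```

              A steadily climbing reallocated-sector count is a common early warning that a drive is on its way out.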

            • teawrecks@sopuli.xyz · 10 months ago

              Seriously? Why be like this? It feels like a Lemmy thing for people to have a chip on their shoulder all the time.

              You shared your understanding, and then I shared mine (in fewer words). I also summarized in one sentence at the bottom. Was just trying to have a conversation, sorry.

        • jkrtn@lemmy.ml · 10 months ago

          I thought you meant 1 TB as a sort of peak performer (better than 2+ TB) in this area. From the description, it’s more like 1 TB is kinda the minimum durability you want with a drive, but larger drives are better?

          • BigMikeInAustin@lemmy.world · 10 months ago

            From the drives I have seen, usually there are 3 write-cache sizes.

            Usually the smallest write-cache is for drives 128GB or smaller. Sometimes the 256GB is also here.

            Usually the middle size write-cache is for 512GB and sometimes 256GB drives.

            Usually the largest write-cache is only in 1TB and bigger drives.

            Performance-wise for writes, you want the biggest write cache, so you want at least a 1TB drive.

            For the best wear leveling, you want a drive as big as you can afford, while also looking at the makeup of the memory chips. In order of longest-lasting first: single-level (SLC), multi-level (MLC), triple-level (TLC), quad-level (QLC).
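            To put rough numbers on that ordering: the program/erase (P/E) cycle figures below are commonly cited ballpark values, not vendor specs, and real endurance varies widely by process node and manufacturer.

```python
# Rough, commonly cited P/E cycle endurance per NAND cell type. Treat these as
# order-of-magnitude illustrations only; actual ratings vary by vendor.
PE_CYCLES = {
    "SLC (single-level)": 100_000,
    "MLC (multi-level)": 10_000,
    "TLC (triple-level)": 3_000,
    "QLC (quad-level)": 1_000,
}

def rough_tbw(capacity_tb, pe_cycles):
    """Crude total-bytes-written estimate (in TB): capacity * P/E cycles,
    ignoring write amplification and overprovisioning."""
    return capacity_tb * pe_cycles

for cell, cycles in PE_CYCLES.items():
    print(f"{cell}: ~{rough_tbw(1, cycles):,} TB writable for a 1 TB drive")
```

            This is also why a bigger drive of the same cell type lasts longer under the same workload: the writes are spread over more cells.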

            • jkrtn@lemmy.ml · 10 months ago

              This is great, thank you! My next drive is going to be fast and durable.

    • LanternEverywhere@kbin.social · 10 months ago

      Yup, knock on wood, I've had lots of Seagate drives over the decades and I've never had any of them go bad. I've had two WD drives and they both failed.

  • teawrecks@sopuli.xyz · 10 months ago

    Assume your hard drives will fail. Any time I get a new NAS drive, I do a burn-in test (using a simple badblocks run, can take a few days depending on the size of the drive, but you can run multiple drives in parallel) to get them past the first ledge of the bathtub curve, and then I put them in a RaidZ2 pool and assume it will fail one day.

    Therefore, it’s not about buying the best drives so they never fail, because they will fail. It’s about buying the most cost effective drive for your purpose (price vs avg lifespan vs size). For this part, definitely refer to the Backblaze report someone else linked.
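    One hedged way to turn "most cost-effective drive for your purpose" into a number is cost per terabyte-year, pricing in expected replacements. The prices and failure rates below are made-up placeholders; plug in real listings and AFR figures from the Backblaze reports.

```python
# Sketch of comparing drives by expected cost per terabyte-year, under the
# assumption that every drive will eventually fail and failed drives are
# replaced at the same price. Numbers below are placeholders, not real data.

def cost_per_tb_year(price_usd, size_tb, annualized_failure_rate, horizon_years=5):
    """Expected cost per TB-year over a planning horizon.

    Approximates replacement cost as price * AFR per year of service.
    """
    expected_cost = price_usd * (1 + annualized_failure_rate * horizon_years)
    return expected_cost / (size_tb * horizon_years)

drives = {
    "hypothetical 8 TB": (150.0, 8, 0.014),
    "hypothetical 16 TB": (280.0, 16, 0.010),
}
for name, (price, size, afr) in drives.items():
    print(f"{name}: ${cost_per_tb_year(price, size, afr):.2f} per TB-year")
```

    With redundancy absorbing the failures, the cheapest cost per TB-year usually wins, regardless of the name on the label.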

    • YodaDaCoda · 10 months ago

      2 of my 6 disks are failing thanks to WD’s EFAX line

      Bastards

  • exu@feditown.com · 10 months ago

    For hard drives, Toshiba, though Seagate would be my second pick. Fuck WD.

    For SSDs I go on Wikipedia, look at a list of flash and controller manufacturers, and pick one of those (Samsung, Kioxia (I think), SanDisk).

  • ronmaide@lemm.ee · 10 months ago

    HGST personally, because my failure count for those drives has been in the single digits across ~60 drives over around 15 years, though every manufacturer is going to have missteps or failures. I can say I've had bad experiences with Toshiba, but I'm sure you can find someone who swears by them too. Ultimately my anecdotal evidence in either direction is an unreliable crystal ball you should take with a grain of salt.

    The suggestion to check Backblaze reports is great, but I'd also recommend varying your manufacturers if you're able, and building your storage solution on the assumption that drives are "wear units" and will fail. If you have some redundancy built in and can tolerate one (or ideally multiple) drive failures without losing data, then even though the question still matters, it matters a bit less.

  • Decronym@lemmy.decronym.xyz [bot] · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    NAS            Network-Attached Storage
    RAID           Redundant Array of Independent Disks for mass storage
    SATA           Serial AT Attachment interface for mass storage
    SSD            Solid State Drive mass storage
    ZFS            Solaris/Linux filesystem focusing on data integrity

    5 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

    [Thread #555 for this sub, first seen 29th Feb 2024, 00:35]

  • talkingpumpkin@lemmy.world · 10 months ago

    With the very limited number of drives one may use at home, just get the cheapest ones (*), use RAID and assume some drive may fail.

    (*) whose performance meets your needs, from reputable enough sources

    You can look at the Backblaze stats if you like stats, but if you have ten drives, a 3% failure rate is effectively the same as 1% or 0.5% (they all just mean "use RAID and assume some drive may fail").

    Also, IDK how good a reliability predictor the manufacturer would be (as in every sector, reliability varies from model to model). Plus, you would basically go by price even if you needed a quantity of drives so great that stats made sense on them (wouldn't Backblaze use 100% one manufacturer otherwise?).
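    The ten-drive point is easy to check with a couple of lines, assuming independent failures (itself an approximation):

```python
# Probability of at least one drive failure in a year among N drives,
# given an annualized failure rate (AFR) and assuming independent failures.

def p_any_failure(afr, num_drives):
    return 1 - (1 - afr) ** num_drives

for afr in (0.005, 0.01, 0.03):
    print(f"AFR {afr:.1%}: {p_any_failure(afr, 10):.1%} chance of >=1 failure")
```

    Even the "good" rates leave a meaningful chance of losing a drive in any given year, so the conclusion is the same across the board: use RAID and assume some drive may fail.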

  • deadbeef@lemmy.nz · 10 months ago

    I swear allegiance to the only one true storage vendor, Micropolis. The Micropolis 1323A being the embodiment of perfection in storage basked in the glow of the only holy storage interconnect, MFM.

    I wait patiently for the return of Micropolis so that I may serve as their humble servant.

  • MangoPenguin@lemmy.blahaj.zone · 10 months ago

    None of them, because every manufacturer has made good and bad products. Seagate had really bad 3TB drives, which gave them a lot of that reputation.

    I just buy whatever fits my budget for HDDs and have proper backups in place. I think almost all of my HDDs are ‘refurbished’ ones.

    For SSDs I look for one with a good TBW rating and a cache in it. Typically I'll go for used enterprise SSDs as well.

  • RedEye FlightControl@lemmy.world · 10 months ago

    Hard disks, WD/HGST.

    I've had good luck with EMC and NetApp for enterprise solutions, Synology for SMB-class NAS storage, and rely on TrueNAS/ZFS on Supermicro hardware at home, which has been rock solid for years and years.