Making devices more repairable is pretty much universally seen as a good thing, right? Unfortunately, engineering involves tradeoffs, and some of the tradeoffs that look bad for repair are actually desirable in spite of it, or even improve reliability. These are some things I suspect right to repair advocates forget.
This article is intended to unify some disparate thoughts I’ve had on the subject across Lobsters comments, this blog (i.e. the ThinkPad post), etc. into one post. I intend to do this more often for other things…
Computers last longer than they used to
This is less related to reliability, and more about how upgrading systems just isn’t what it used to be. Computers were once obsolete out of the box and quickly became less useful over the years, especially during times like the gigahertz war. Back then, upgrading was likely done out of necessity. Nowadays, you can use a decade-old (if not older!) computer on the web. It might not be the fastest performer, but it’ll get the job done, which is more than we could say a decade or two ago; a 486 was a decade old in 2000, but mostly useless for the internet of the time.
Nowadays, a decade-old system is something like a Sandy Bridge i5. By the time it falls apart from normal usage, newer systems have likely far outstripped it, and it’s not economically viable to upgrade it. It might be viable to repair it and continue using it, but there’s a good chance that perishables like batteries are no longer made, and new-old-stock has likely decayed in storage. Perhaps it’d be time to upgrade; a decade of useful life in normal situations is still pretty good for a computer, especially considering how they used to age like milk.
Battery interlude
That said, I am sympathetic on the subject of batteries, because they’re the biggest wear item in laptops other than the hinges. Yet it’s not a problem people are approaching. Framework uses a prismatic battery, which is useful for energy density, but likely hard to source. Cylindrical batteries would solve that, but introduce a lot of compromises of their own with size and power management. Power management is the difference between your batteries lasting one year versus ten years, after all; I’d care less about hot-swap battery bays if batteries lasted all day and kept that lifespan for a decade.
You can’t beat physics
Memory bandwidth is one of the biggest factors in computer performance; the faster you can run memory (bandwidth and latency), the faster your system will be, especially with an integrated design where more things share the memory bus. Signal integrity is one of the big factors in how fast you can run memory, and slots and sockets introduce a lot of complications there. It’s why soldering down the memory lets manufacturers run it at higher speeds. Even bus width is impacted; you’d need a lot of slots to match the bandwidth possible with soldered-down LPDDR. I think it’s a fair compromise.
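To make the bus-width point concrete, here’s a rough back-of-the-envelope sketch of theoretical peak bandwidth. The specific parts and bus widths below (dual-channel DDR5-5600 SO-DIMMs versus a 256-bit soldered LPDDR5X-8533 configuration) are illustrative assumptions, not measurements of any particular machine:

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    """Theoretical peak bandwidth in GB/s (decimal).

    bus_width_bits: total width of the memory bus in bits
    transfer_rate_mts: transfers per second in MT/s
    """
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * transfer_rate_mts * 1e6 / 1e9

# Two SO-DIMM slots: 2 x 64-bit channels of DDR5-5600
dimm = peak_bandwidth_gbs(128, 5600)    # ~89.6 GB/s

# Soldered LPDDR5X-8533 on an assumed 256-bit bus, as on some integrated SoCs
lpddr = peak_bandwidth_gbs(256, 8533)   # ~273 GB/s

print(f"slotted: {dimm:.1f} GB/s, soldered: {lpddr:.1f} GB/s")
```

Matching that soldered figure with slots would take roughly twice as many DIMM channels at substantially higher clocks, which is exactly where signal integrity and board space become the limiting factors.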
Some compromises are a little trickier; for example, Apple puts the NVMe controller on the SoC itself, which gives it direct bus access for better performance and power management (i.e. idle). This does rule out traditional SSD replacement, though, even though NAND doesn’t have the same signal integrity issues as DRAM. It’d be nice, then, to have raw NAND on a card or a similar arrangement to keep the best of both worlds; I assume MacBooks have the volume to justify such an arrangement. (That said, Apple does use good-quality NAND and a good controller, so they may outlast the useful life of the laptop.)
Points of failure
The parts that make swapping commodity components possible (like slots) can introduce a lot of problems themselves. In addition to the aforementioned signal integrity losses, there are also mechanical failures. If the system is dropped, something is more likely to go wrong with a slot than with a soldered-down part. In addition, the slot itself can fail and need replacement. This isn’t theoretical; the ThinkPad T30 was plagued by memory slot failures.
More components in general just means more things that can go wrong, too. For example, dedicated GPUs are a common failure point on higher-end laptops due to thermal stress. The more modern laptops solder down, the less there is to break and need repair. But as for integration in general…
The long march of VLSI
In the past, long before systems-on-a-chip, systems were made out of discrete logic. Eventually, entire boards got consolidated onto integrated circuits, and those further onto even smaller, denser ICs. No one complains that their ALUs, L2 cache, and northbridge are all on the same silicon, when it makes the system far less likely to fail at the cost of, in theory, less repairability.
Parts availability and manufacturer support
As a spicy (but grounded) introduction to this section: I think the only phone you can buy today and use until it breaks, for the longest time possible, is an iPhone. (Which is what I did; I bought an iPhone and intend to use it for a decade.) This is for a few reasons, but it almost always comes down to support.
Software support is the biggest reason. iPhones get years of security and feature updates; I can’t think of any Android phone that got updates as long as the iPhone 6S did. Software support is important if you want to be a good citizen on the web, and not some kind of malware superspreader. It’s not strictly needed, but it means I can use something in good conscience.
The other is hardware support. Parts are made for a long time, unlike for most phones, and iPhones are optimized for repair in that the two most common service items (screen and battery) come out first. This is unlike a lot of phones where the screen is part of the frame in ways that make it harder to open up. But I’m probably not going to open it up myself (last time I tried that, on an old Xperia Play, the phone just developed a new problem instead – it wasn’t worth fixing anymore); Apple stores have pretty good parts and labour costs on the item I’d most likely need to replace over the life of the phone – the battery. And I actually trust Apple to do a proper recall, too.
Defective design (is it worth it to repair?)
If a design has a defect that needs repair, is it worthwhile to keep throwing good money after bad? Take the 2016 MacBook Pros, with the keyboard issues that make that entire generation a bad vintage. Throwing keyboards at the problem won’t solve the fundamental issues with that design, which would justify switching over to something else, IMHO.
Even without problematic designs, this is something to keep in mind. Just because you can fix it, doesn’t mean it’s worthwhile. For a concrete example of one angle, think of the Italian luxury cars you can throw parts at.
The water issue
Waterproofing a device can make it harder to open (seals, gaskets, etc.), but it massively reduces the risk that the device needs to be fixed in the first place. I don’t think anyone complains about this.
Security against physical attackers
Some devices pair parts for security purposes. Considering state-level attackers are going after phones, it seems reasonable to me. Everyone gets the benefit of a more secure device, and it makes secure devices stick out like a sore thumb less. Of course, I think that’s justified only for parts like the camera that are part of security-sensitive subsystems. For things like batteries, without security repercussions, I don’t think it’s defensible.
The moving goalposts
Ultimately, I wonder if the age of slotted components is simply coming to an end, in the same way the era of discrete components like flip-chip modules and SLT did. We’re going to have to get used to soldering irons, ovens, and SMD components. Hell, even “makers” have finally gotten used to SMT. Through-hole used to be the order of the day; now they’re getting used to finer and finer pitch components. Maybe soldered components on a laptop will sting less if people are expected to own irons.