  • I agree, and there are a number of other biases to consider. Here are some I can think of:

    • Firefox will mainly be running on desktops, laptops and smartphones. I would expect QA to be significantly better for this type of device than, say, consumer grade routers or TV boxes. But more concerning to me is stuff like cheap ATMs, industrial control systems (although Siemens have great QA) and elevator control systems etc. Infrastructure, not consumer toys, and Mozilla obviously aren’t the right people to say anything about the state of any of that.
    • While Mozilla is currently estimating approximately 200 million installs, some of those - especially on Linux - will have disabled telemetry. I know I do. With that said, I can’t recall the last time I had a FF CTD (crash to desktop), but I suspect when I did, it wasn’t even a bug but an OOM (out-of-memory) kill, because I was browsing on something like a 2 GB RAM micro-portable with insufficient swap. FF is one impressively stable piece of software these days.
    • Firefox usage is not evenly distributed globally, and I have no way to reliably assess whether FF has a proportionally larger or smaller user base in regions that rely more on older or refurbished hardware, which I would expect to have higher HW error rates. (I can’t prove that either - I can’t find any good public aggregate data on RAM MTBF trends over time, but I’d be very interested if somebody else knows where to find authoritative answers on that.)

    (Un)fortunately, this may be the most Mozilla can provide in terms of insight. Their users tend to be particularly sensitive to perceived or practical privacy violations, so I understand - and appreciate - their caution in gathering data.


  • Fair question. I find it unnerving, because there’s very little a software developer can meaningfully do if they cannot rely on the integrity of the hardware upon which their software is running - at least not without significant costs, and ultimately, if the problem is bad enough, even those measures would fail. This finding seems to indicate that a lot of hardware is much, much less reliable than I would have thought. I’ve written software for almost thirty years and across numerous platforms at this point, and the thought that I cannot assume a value stored in RAM will reliably retain its value fills me with the kind of dread I wouldn’t be able to explain to someone uninitiated without a major digression. Almost everything you do on any computing device - whether a server or a smartphone - relies on the assumption of that kind of trust. And this seems to show that assumption is not merely flawed, but badly flawed.

    Suppose you were a car mechanic confronted with a survey showing that 10 percent of cars were leaking brake fluid - or fuel. That might illustrate how this makes me feel.
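
    The costly software-level defence alluded to above - checking that RAM still holds what you wrote before trusting it again - can be sketched in a few lines. This is a hypothetical illustration (the function names and the CRC-32 choice are mine, not anything a particular project does):

```python
import zlib

def protect(value: bytes) -> tuple[bytes, int]:
    # Store the value together with a CRC-32 checksum of its contents.
    return value, zlib.crc32(value)

def verify(value: bytes, checksum: int) -> bool:
    # Recompute the checksum; a mismatch means the bytes changed in memory.
    return zlib.crc32(value) == checksum

data, crc = protect(b"account balance: 1000")
assert verify(data, crc)            # the intact copy passes

corrupted = bytearray(data)
corrupted[0] ^= 0x01                # simulate a single-bit flip in RAM
assert not verify(bytes(corrupted), crc)
```

    Of course, this only detects corruption (and only between write and check); recovering from it needs redundancy such as ECC RAM or duplicate copies, which is exactly the "significant cost" the comment is getting at.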