With the days of dial-up and pitiful 2G data connections long behind most of us, it is tempting to stop caring about how much data an end-user is expected to suck down that big and wide broadband tube. This is a problem if your respective tube happens to be a thin straw and you’re located at a base somewhere in the Antarctic. Take it from [Paul Coldren], who spent a total of 14.5 months as an IT specialist at a number of Antarctic research stations, starting in August of 2022.
![Prepare for hours of pain and retrying downloads. (Credit: Paul Coldren)](https://hackaday.com/wp-content/uploads/2025/09/engineering-for-slow-internet-icon.png?w=400)
As [Paul] describes, the main Internet access at these bases is via satellite links, with the satellites effectively acting as relay stations. With over a thousand people at a station like McMurdo during parts of the season, bandwidth is a precious commodity and latency is understandably high.
This low bandwidth led to highly aggravating situations, such as a web app timing out on [Paul] while it was downloading a 20 MB JavaScript file, simply because things were going too slowly. Upon timing out, the app would wipe its cache, redirect to an error page, and leave [Paul] to retry over and over in the hope of squeezing the download into the timeout window. Instead of simply letting the download complete in roughly 15 minutes, it took nearly half an hour this way, just so that [Paul] could send a few kB worth of text in a messaging app.
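The maddening part is that such timeouts fire even while bytes are still trickling in. As an illustration of the alternative, here is a minimal sketch in TypeScript – a hypothetical example, not code from any of the apps involved – of a download whose timeout only trips after a stretch of no progress at all, so a slow but steadily advancing transfer is never killed:

```typescript
// Sketch: abort a download only when *no* bytes have arrived for `idleMs`,
// instead of enforcing a hard deadline on the total transfer time.
async function downloadWithIdleTimeout(
  url: string,
  idleMs = 60_000, // an arbitrary "no progress" window, not a total time limit
): Promise<Uint8Array> {
  const controller = new AbortController();
  let idleTimer = setTimeout(() => controller.abort(), idleMs);

  const response = await fetch(url, { signal: controller.signal });
  if (!response.ok || !response.body) {
    clearTimeout(idleTimer);
    throw new Error(`HTTP ${response.status} for ${url}`);
  }

  const chunks: Uint8Array[] = [];
  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    chunks.push(value);
    // Progress was made, so push the deadline out rather than counting total time.
    clearTimeout(idleTimer);
    idleTimer = setTimeout(() => controller.abort(), idleMs);
  }
  clearTimeout(idleTimer);

  // Stitch the chunks back together into a single buffer.
  const out = new Uint8Array(chunks.reduce((n, c) => n + c.length, 0));
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.length;
  }
  return out;
}
```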
In addition to these artificial timeouts – imposed despite continuing download progress – there is the issue of self-updating apps whose downloaders do not let you schedule, pause, resume, or do anything else that would make fetching a massive update somewhat feasible. Then there is the duplication problem, such as when hundreds of people at said Antarctic station all try to update macOS simultaneously. Here [Paul] ended up painfully and slowly downloading the entire 12 GB macOS installer once, to distribute it across the station, but a Mac might still try to download a few GB of updates regardless.
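Resumable downloads are similarly old, well-understood technology. Here is a minimal sketch of that piece, again in TypeScript and purely illustrative (it assumes Node 18+ for the global fetch and a server that honours byte ranges; the function and path names are hypothetical), picking an interrupted transfer up where it left off instead of starting over:

```typescript
// Sketch: resume a partial download with an HTTP Range request.
// Assumes the file at `url` does not change between attempts; corner cases
// such as 416 (file already complete) are left out for brevity.
import { promises as fs } from 'node:fs';

async function resumeDownload(url: string, destPath: string): Promise<void> {
  // How many bytes did a previous, interrupted attempt leave behind?
  let have = 0;
  try {
    have = (await fs.stat(destPath)).size;
  } catch {
    // No partial file yet: start from byte zero.
  }

  const response = await fetch(url, {
    headers: have > 0 ? { Range: `bytes=${have}-` } : {},
  });
  if (!response.ok || !response.body) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }

  // 206 Partial Content: the server honoured the Range header, so append.
  // A plain 200 means it sent the whole file again, so overwrite instead.
  const file = await fs.open(destPath, response.status === 206 ? 'a' : 'w');
  const reader = response.body.getReader();
  try {
    for (;;) {
      const { done, value } = await reader.read();
      if (done || !value) break;
      await file.write(value);
    }
  } finally {
    await file.close();
  }
}
```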

This level of pain continued with smartphone updates, which generally do not allow updating the phone’s OS from a local image. To make a phone resume an update download, [Paul] had to turn it off when internet connectivity dropped out – due to satellites going out of alignment – and turn it back on when connectivity was restored the next day.
Somewhat surprisingly, the Microsoft Office for Mac updater was an example of how to do it at least somewhat right: it let you pause and cancel, showed download progress, and resumed interrupted downloads without any fuss. Other than not exposing the underlying update file for download and distribution via e.g. sneakernet, this was a pleasant experience amid the many examples of modern-day hardware and software that simply gave up and failed at the sight of internet speeds measured in kB/s.
Although [Paul] isn’t advocating that every developer should optimize their application and updater for the poor saps stuck on the equivalent of ISDN at a remote station or in a tub floating somewhere in the Earth’s oceans, he does insist that it would be nice if you could do something like send a brief text message via a messaging app without having to fight timeouts and other highly aggravating ‘features’.
Since [Paul] returned from his last deployment to the Antarctic in 2024, it appears that at least some of the stations have been upgraded to Starlink satellite internet, but this should not be taken as an excuse not to take his plea seriously.
Very nice, I love seeing optimisation (and accounts of frustration) related to low-bandwidth internet connections. To rub salt in the wound, this is also high latency! Low-bandwidth but low-latency connections are surprisingly usable, or at the very least not as infuriating to use.
I had to use a dial-up network in the early 2010s for a few months due to some living circumstances. It taught me a lot (although the internet wasn’t littered with large desktop-style JavaScript apps yet). Browsing was only possible by blocking all JavaScript and images on webpages by default.
If anyone can get me onto a flight to Antarctica, I will be forever grateful! It’s the only place in the entire world that I wish to visit as a tourist!
If I had low bandwidth but also low latency, I’d rent a small VM on a cloud provider and browse via Chrome Remote Desktop or similar. But if I had low bandwidth and HIGH latency, I might browse mainly through Carbonyl over SSH to that small VM. Carbonyl is a good middle ground between pure text-terminal CLI web browsers such as elinks and a full desktop browser: it embeds a Chromium browser for excellent rendering, just at a reduced resolution. For pages that don’t render well on it, I’d use a standard browser. Text is rendered perfectly; graphics and video are blocky but usable.
https://github.com/fathyb/carbonyl
I agree. Why should a simple webpage be so huge? I loved having ISDN in the ’90s when everyone else had dial-up. When I used to write web pages I would target dial-up for the simple reason that we didn’t want our customers to have to wait if they only had dial-up.
Microsoft FrontPage even had a connection simulator.
You could select different speeds and then it would calculate how long the site would take to load.
Our IT teacher in school often used it and then had us pupils compress the pictures even more.
It was trying to work at the end of a slow and iffy broadband connection (rural UK) that forced me to dump Windows, with its refusal to work until it had downloaded GBs of updates, in favour of (K)Ubuntu.
Unfortunately, a few years later they started forcing auto-updating snaps on their users.
I’m now on Debian with KDE, in control of my own computers, and happy. I’m about to get fibre to the premises after many years of broken promises and delays, but I won’t be going back.
As someone who’s had to deal with low-speed internet access pretty much my entire life in one way or another, I’m grateful to see this article. The modern web is surprisingly user-hostile to low-speed and/or high-latency connections. The real kicker is that this is a totally solved problem: all the design decisions needed to make downloads work properly on such connections were figured out during the early years of the internet. Web page designers just don’t bother to think about low-speed connections. Having a download proceed slowly and steadily until it gets to 90%, only to be terminated because it took longer than some arbitrary limit, is baffling. What does it accomplish? I’m then forced to restart the download over and over again until it finally comes down fast enough to beat the arbitrary limit. So instead of using x amount of bandwidth, I’m forced to use 8x or more.
As recently as 2018 I lived off of a 1 Mbps Internet connection. That’s about 128 KB/s.
Most services would work at that point but not well, and there was always the occasional website that was too poorly coded to work.
I think everyone who is building a website should be forced to test it over a connection like that before they call it finished.
You don’t even need a slow connection to test it. Web browser developer tools have the ability to simulate different types of connections.
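For automated testing, the same throttling knobs are exposed through the Chrome DevTools Protocol. Here is a minimal sketch using Puppeteer; the latency and throughput numbers are made-up placeholders meant to roughly mimic a congested satellite link:

```typescript
// Sketch: load a page under simulated slow-satellite conditions via the
// Chrome DevTools Protocol. The numbers below are illustrative placeholders.
import puppeteer from 'puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const cdp = await page.createCDPSession();
  await cdp.send('Network.enable');
  await cdp.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 700,                        // added round-trip time in ms
    downloadThroughput: (64 * 1000) / 8, // ~64 kbit/s down, in bytes/s
    uploadThroughput: (32 * 1000) / 8,   // ~32 kbit/s up, in bytes/s
  });

  const start = Date.now();
  await page.goto('https://example.com', { timeout: 0 }); // no arbitrary deadline
  console.log(`Loaded in ${(Date.now() - start) / 1000} s under throttling`);

  await browser.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```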
Nah, some devs have to suffer to learn. Let them sit in front of some street-curb potato someone in the third world would gladly give a liver for. Let them develop their webpage on dial-up. Give them 10 MByte for the whole website and forbid CDN usage – CDNs usually run afoul of the GDPR anyway. If they make their site accessible they might even get an upgrade to 768 kbit/s DSL.
Maybe then they will finally learn how to develop for everyone.
People keep forgetting that limited-resource computing is still useful, and how little actual value a lot of high-resource computing provides.
The old saying used to be “Intel giveth and Microsoft taketh away”: every time CPUs got faster, OSes and applications added enough chrome plating to bring performance of the same software back down to the same level as before. Today’s average computing device devotes more hardware than an early supercomputer to compositing graphics for the display.
Microcontrollers that cost less than a dollar have more computing resources than the machines that made desktop computing a thing. We were well into the ’90s before a whole Windows installation was larger than the 20 MB JavaScript file mentioned above.
Machines with kilohertz clock speeds, kilobytes of RAM, and 300K 5-1/4″ floppy disks had enough power to take over the world. Network connections over 9600 baud modems did it again. A disproportionate amount of CPU and network development since then has gone to enabling people who can’t imagine giving up the nuances of their webpage’s 8 MB background image.
More or less. A 386 CPU is still way more capable than an ATtiny13 or ATmega328P.
The memory of a microcontroller is far less than what a low-end DOS PC had in 1990 (2 KB of RAM vs. 2 MB and up).
When many people think of a powerful “microcontroller”, they really mean a single-board computer (SBC).
A Raspberry Pi (any model) is a single-board computer rather than just a microcontroller.
A traditional microcontroller is something like an 8051.
That chip was also used to build single-board computers with external storage/RAM and I/O ports.
Microcontrollers have come a long way since ATmega and 8051.
I have used the NXP LPC55S69 (ca. 5 €, eval board ca. 50 €) in a few projects: 150 MHz 32-bit ARM core with FPU, 320 kB SRAM, 640 kB flash.
Maybe. New microcontrollers with an 8051 core are still made, though.
The architecture is far from obsolete.
In the sub-$1 range we are already in ARM Cortex-M0 territory – so already 32 bits and over 40 MHz. Not sure how it compares to i386 computing power, though.
According to Google, the ARM Cortex-M0 manages ~0.9 DMIPS per MHz.
A 33 MHz 386DX has 11.5 MIPS, according to Wikipedia.
Not bad for a CPU from 1985.
Also, comparing RISC vs CISC isn’t easy.
Real-world performance might differ depending on the application.
Another early-’90s CPU, the 486DX at 50 MHz, reaches 41 DMIPS.
It is probably a better match for the Cortex-M0.
My point simply was that we tend to forget how capable technology was from decades ago.
It’s too easy to fall for the rose-colored “see how far we have come” point of view and underestimate our prior technology.
Oh, and in the early ’90s there were coin-sized 386/486 processors, by the way.
The 386SL and similar low-power notebook processors were very small parts.
Anyway, just saying. By “386/486” many people may be thinking of the full-size ceramic packages.
Yep, common in Alaska. Where I was, for years internet plans charged by the GB as well as by speed, and latency was always high.
Cellular roaming (e.g. a US phone/plan in the EU) latency is always high, since your traffic is routed back through the US and you end up with a US IP.
I have rural property in the northern Midwest where we will be building a house. We have cellular service there, but latency is what kills our internet access on the phones. We are in an area that gets a fair amount of weekend tourist traffic. Between Friday afternoon and Sunday evening, web pages time out, except sometimes overnight and early in the morning. This is probably due to the cell tower itself prioritizing phones that are closer to the tower based on signal time-of-flight (I have tried a powered antenna booster with little improvement). It even made my cellular weather station unusable after their provider changed the bands they used. I got about 3 months of a working connection until the change was made. Of course the weather station manufacturer doesn’t see this as their problem….
Sounds like a good use for a VPS. Let it do the heavy lifting and send just the results his way. I seem to remember Opera doing something like this.
Meh. Just use eDonkey.
20 megabytes? He should count himself lucky he didn’t have to deal with a Telerik-based web app.
Yup, slow internet must be tough. We survived with phone patches and HF radio, 75 baud RTTY… upper-sideband RATT, lower-sideband voice.
This article is a must-read for all app and web developers. You will get more customers if you also cater to low-bandwidth users. Once upon a time, the web worked acceptably well over slow internet connections. You could sometimes even select a low-bandwidth option for sites (e.g. news sites) that served mostly text. Then AJAX allowed incrementally served content to become the norm, and advertisers started sending large payloads of inefficient JavaScript code. I block ads on my failing, Intel-based 2014 MacBook Pro, not because I hate ads, but because the ads routinely overheated my CPU and GPU. ABP fixed all that overnight, but now the ad-blocker detectors hassle me on a daily basis.