Last Easter, I was running a checkpoint at Imbil as I’ve done before… operating a checkpoint at Derrier Hill Grid with horses from three different events passing through simultaneously, coming from two different directions, and getting more confused than a moth in a light shop. At the time, I thought it’d be really handy to have a software program that could “sort ’em all out”: I punch in the competitor numbers, it tells me what division they’re in and records the time… I then assign the check-points and update the paperwork.
We have such a program, a VisualBASIC 6 application written by one of the other amateurs; however, I use Linux. My current tablet, a Panasonic FZ-G1 Mk1, won’t run any supported version of Windows well (Windows 10 on 4GB RAM is agonisingly slow… and it goes out of support in October anyway), but would otherwise be an ideal workhorse for this, if I could write such a program.
So I rolled up my sleeves and wrote a checkpoint reporting application. I chose Java because the result can run on Windows as well as on any Linux distribution with OpenJDK’s JRE. I wanted a single “distribution package” that would run on any system with the appropriate runtime; that way I wouldn’t need a build environment for each OS/architecture I wanted to support.
One thing that troubled me through this process… was getting image resources working. I used the Netbeans IDE to try and make it easier for others to contribute later on if desired: it has a GUI form builder that can help with all the GUI creation boilerplate, and helps keep the project structure more-or-less resembling a “standard” structure. (This is something that Python’s tkinter seriously lacks: a RAD tool for producing the UIs. The author of the aforementioned VB6 software calls it “T-stinker”, and I find it hard to disagree!)
Netbeans defaults to using the Maven build system for Java projects, although Ant and Gradle are both supported as well. (Not sure which of the three is “preferred”; I know Android projects often use Gradle… thoughts, Java people?) It also supports adding bitmap resources to a project for things like icons. I used some icons from the GTK+ v3 (LGPLv2) and Gnome Adwaita Legacy (CC BY-SA 3.0) projects.
The problem I faced was actually using them in the UI. I was getting a NullPointerException every time I tried setting one, and Netbeans’ documentation was no help at all. It just wasn’t finding the .png files, no matter what I did:
2025-06-15T06:32:54.461Z [FINEST] com.vk4msl.checkpointreporter.ui.ReporterForm: Choose nav tree node test
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at javax.swing.ImageIcon.<init>(ImageIcon.java:217)
at com.vk4msl.checkpointreporter.ui.event.EventPanel.initComponents(EventPanel.java:239)
at com.vk4msl.checkpointreporter.ui.event.EventPanel.<init>(EventPanel.java:63)
at com.vk4msl.checkpointreporter.ui.ReporterForm.showEvent(ReporterForm.java:895)
at com.vk4msl.checkpointreporter.CheckpointReporter.showEntity(CheckpointReporter.java:532)
at com.vk4msl.checkpointreporter.ui.ReporterForm.navTreeValueChanged(ReporterForm.java:480)
at com.vk4msl.checkpointreporter.ui.ReporterForm.access$100(ReporterForm.java:70)
at com.vk4msl.checkpointreporter.ui.ReporterForm$2.valueChanged(ReporterForm.java:182)
Maybe it’s my search skills, or the degradation of search, but I could not put my finger on why it kept failing… the file was where it should be, the path in the code was correct according to the docs, so why was it failing?
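For context, the icon-loading code that the GUI form builder generates boils down to something like this (a paraphrase; the component and file names here are made up for illustration):

eventIconLabel.setIcon(new javax.swing.ImageIcon(getClass().getResource(
        "/com/vk4msl/checkpointreporter/ui/components/icons/event.png")));

getResource() quietly returns null when the file can’t be found on the classpath, and ImageIcon’s constructor then falls over with the NullPointerException above.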
Turns out, when Maven does a build, it compiles everything into a target/classes directory, and when Netbeans runs your project, it does so out of that directory. Maven did not bother to copy the .png files across, because Netbeans never told it to.
I needed the following bit of code in my pom.xml file:
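It’s essentially the standard Maven <resources> override; something along these lines (the exact directories are assumptions based on where Netbeans keeps the form sources, so adjust to suit your project):

<build>
  <resources>
    <!-- Keep the default resources directory... -->
    <resource>
      <directory>src/main/resources</directory>
    </resource>
    <!-- ...and also copy the .png icons that live alongside the .java sources. -->
    <resource>
      <directory>src/main/java</directory>
      <includes>
        <include>**/*.png</include>
      </includes>
    </resource>
    <!-- Ship the README.md too. -->
    <resource>
      <directory>${project.basedir}</directory>
      <includes>
        <include>README.md</include>
      </includes>
    </resource>
  </resources>
</build>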
That tells Maven to pick up those .png files (in the com.vk4msl.checkpointreporter.ui.components.icons package) and put them, along with the README.md, in the staging directory for the application. Then Java would be able to find those resources, and they’d be in the .jar file in the right place.
Other suggestions have been to move the project to using Ant (which was the old way Java projects were built, but seems to be out of favour now?)… not sure if Gradle has this problem… maybe some people more familiar with Java’s build systems can comment. This is probably the most serious Java stuff I’ve done in the last 20 years.
I used Java because it produced a single platform-independent binary that could run anywhere with the appropriate runtime, and featured a runtime that had everything I needed in a format that was easy to pick back up. I’ve used C# for command-line applications at university, but I’ve never done anything with Windows Forms, so I’d have to learn that from scratch as well as wrestle with MSBuild (yuck!). Python was almost it, but as I say, dealing with tkinter, and trying to map it to all the Tk docs out there that assume you’re using Tcl, made it a nightmare to use. I didn’t want to bring in third-party libraries like Qt or wxWidgets as that’d complicate deployment, and other options like C++, Rust and Go all produce native binaries, meaning I’d have to compile for each platform (or force people to build it themselves).
Java did the job nicely. It’s not the prettiest application, but the end result is a basic Java program, using the classic Swing UI toolkit, that should be a big help at Southbrook later this month. I’ll probably build on this further, but it should go a long way towards scratching the itch I had.
Today, I noticed I had an extremely large number of “new” users (206 accounts created at the time I first looked), with an odd username pattern:
www.XXXXX.blogspot.YY - Z.ZZZ BINANCE
XXXXX was a randomly chosen set of letters, YY was one of the many TLDs that Blogspot uses, and Z.ZZZ was some kind of price value. Clearly, spammers have found a new way to send spam: abusing the registration email that WordPress sends to confirm your email address is valid.
This blog has always allowed comments in one form or another. Originally I allowed anonymous user comments and ping-backs, but when those got abused, I disabled them, requiring that a user be registered with the site to comment.
That has been fine up until today. Some “BINANCE” arsehole decided that the username field was a perfect way to spray shite from one end of the Internet to the other. In total, 230 accounts were created in the past few hours, mostly to gmail.com email addresses.
Maybe the username field could be stripped from such emails, so the only thing a spammer can supply is the email address itself. That would make sending mass spam this way very difficult; they’d have to “encode” the message in a sub-address, which not all providers support, and the ones that do handle it in different ways.
We’ll see what damage that’s done to the sender score on this site. I have a second route I can use for sending outbound email that’s got a clean reputation so not all is lost.
Comments for the time being will now be exclusively through ActivityPub.
This week just gone, we not only had a state election here in Queensland, with a conservative government (LNP) returned to power… but we also witnessed the US hold their federal election, and vote in a conservative party there too.
In both jurisdictions, the term is supposedly 4 years. Here, I expect the 4-year term will be upheld… this has never been an issue before, and while I do remember a time when our state parliament sat for a 3-year term, it has never been indefinite.
Over in the US, there were rumblings along the lines of “we’ll fix it so you don’t have to vote anymore”. Make of that what you will.
Next year, Australia goes into federal election mode… While our mainstream conservative parties are nowhere near as right-wing as some of the minor parties (looking at you, FFP, ONP and KAP), and state politics is theoretically separate from federal, hearing Queensland LNP front-benchers contradict then opposition leader (and now Queensland Premier) David Crisafulli on issues like abortion really does feel like they’re echoing the right-wing governments overseas.
Of course, it was one of the right-wing minority party leaders, KAP’s Robbie Katter, that happened to toss that grenade into the mix… nearly de-railed the train entirely for the LNP. I think therefore this is just a taste of what we can expect next year: minor parties (especially conservative ones) throwing their smoke bombs into the debate whenever possible on all kinds of issues… and some of them drawing “inspiration” from overseas.
The previous LNP government wound up being a disagreeable mob that argued with everyone; Campbell Newman (whom we had as our local member here in the seat of Ashgrove) and his government were tossed out at the very next opportunity. I think they may have learned from that and will pull their heads in a bit more… we’ll see.
Their big platform was youth crime, and their big headline remedy, “adult crime: adult time”, has been criticised as not having solved the problem anywhere else it was attempted. There’s apparently some early intervention to try and address issues before they boil over into major societal problems… that should have a positive influence… the efficacy of the punishments, though, will need to be proven.
Federally… we’ve got different challenges now. The US threatening new tariffs will not only push up inflation in the US (a nice foot-gun you’ve got there, Trump), it’ll also spell trouble for Australian exporters, notably our mining operations. The rhetoric during this year seems to spell trouble for both the NATO and AUKUS alliances. If the US pulls out of these, Australia will be very isolated as the UK is on the opposite side of the planet and (thanks to Brexit) a financial basket-case.
A regression in the situation in Ukraine will make things unstable in Europe generally; if that spills over, the fact that we’ve got the UK as an ally may be meaningless, as there’s a lot of ocean to cover for their aid to reach us. Closer to us, the situation between China and Taiwan is quietly simmering away… whilst Israel wages war with both Palestine and Lebanon.
Donald Trump’s rhetoric over the previous term does not bode well for a world of peace. Some have praised his manner of speaking as being “refreshing”… well, it most definitely is different. Diplomacy is a game of subtle nuance. Always has been. I’m not sure shouty-shouty megaphone diplomacy will work. It didn’t work that well for Germany in the 1930s, and many today draw parallels between that time and today’s US. The last US presidential debate between Trump and Kamala Harris gave us a pretty good peek at what we will probably get. Malcolm Turnbull’s “robust discussion” with Trump back in 2016 suggests as much too. For a few weeks I had this cartoon living rent-free in my head…
“Debating an idiot is like playing CHESS with a PIGEON… He’ll just knock pieces over then claim he won!” — my take on the US Federal Election 2024 posted to Mastodon. Yes, I am terrible at drawing people, but that does not stop me from trying my hand anyway. At least I didn’t use an AI!
Well, world leaders and heads of state alike will be debating the pigeon for another four years.
Next year, it’ll be our turn in Australia. Federal parties will need to balance urban and rural needs: this is an area where Queensland’s parties failed. Labor did really well in urban seats, but failed miserably in the regions. A similar pattern was seen in the US election, with most of those in rural areas preferring the Republican Party, whilst in urban areas the Democrats were favoured.
A party should not be representing just the regions or just the urban centres; they are being elected to represent both. Cost of living is a big issue right now, something a third world war will not improve. A world war might mean we in Australia are isolated and unable to import a lot of things, making everyday items a lot more expensive. So encouraging local production and level-headed diplomacy will be critical.
Healthcare is a big issue in the regions, especially for specialist services. As it happens, our food and minerals do not come from the CBDs of capital cities — so we really do need to be helping out there to make life more viable. This means hospitals should be aiming to provide what their patients need, not inflicting restrictive guidelines on people who have few viable alternatives.
Climate change will affect us all, urban and rural… we can’t rely on digging up former dinosaurs to fuel everything long-term… we’ve left it a little late to be constructing big nuclear plants. While smaller options exist (small modular reactors are used quite successfully on submarines), a big honking reactor the size of Tarong is a biiiig risk in Australia’s climate.
Yes, Europe has lots of them, but Europe built nearly all those decades ago, when they were not getting massive wildfires and 40°C+ temperatures. A small reactor that we can shut down, crane onto the back of a truck, and shift out of harm’s way might be useful for propping up parts of the grid in times of need. A reactor that is too big to move is a major risk in a flood or bushfire emergency, and we have had a longer and more frequent history of these than any part of the world that currently uses nuclear. Fukushima, despite the low number of people killed as a result of the reactor (most people in that disaster lost lives due to the tsunami), is not a blueprint for how to build a large reactor in a risky area.
Battery technology isn’t ideal right now; I’m not sure I like the idea of dealing with bushfires that are lithium-enhanced… but lithium batteries are not the only option out there for fixed installations. This blog runs on dated but still useful AGMs. There exist other storage technologies which could be viable at scale and should be considered. A former manager of mine was keen on the Zebra battery, which is a form of molten-salt battery. I couldn’t source one for him in 2008, so we ended up going with LiFePO₄ cells… but there is probably wisdom in using a battery that is fine with heat.
We’re likely to see a big influx of migration over the next few years, as conflict and hatred make the planet overall a more dangerous place. We’re hearing the phrase “your body, my choice” a lot now, a phrase no woman deserves to have levelled at her; women are more than just brood stock.
Increasingly, some governments have shown anti-transgender attitudes as well; transgender people are a group who do not choose their condition any more than a baby born without eyes chooses to be blind. Everyone has challenges, and everyone deserves assistance with their challenges, whatever those happen to be. We shouldn’t be discriminating against people on the basis of the (sometimes unique) challenges an individual might face.
Neurodivergence also seems to be in the cross-hairs: if that ever gets imported into Australia’s mainstream politics, yours truly will be in the cross-hairs here! I’ve faced discrimination before (looking at you Hilder Road & The Gap State Schools).
Lots of people from these marginalised groups will be on the move, escaping discrimination. We need to do our best to ensure the same hate movement does not rise here: these are people who have a lot to give if given the opportunity.
Circling back to health for a moment too… COVID-19 still rages on. There was a pleasing-looking trend in the Queensland hospitalisation statistics a month or two back, showing COVID-19 and influenza cases well down. Not zero, they’re not gone… but not as bad as they once were. Sadly it won’t stay that way. Over in the US, they’re talking of giving the health portfolio to Robert F. Kennedy Jr, someone who seems keen to continue grinding Andrew Wakefield’s axe, and who seems to be very much against current preventative measures for containing contagious disease. With H5N1 (bird flu) rearing its ugly head, the never-really-dealt-with and worsening COVID-19 situation over there, and diseases we thought we had beaten, like polio, making a comeback… we may see some particularly nasty bugs hit our shores. Get ready for Pandemic 2.0.
So, a lot in store for the next 4 years at least… I think Europe is going to play a major role in the medium term. They are already showing a lot of leadership over technical standards (we can thank the EU for universal chargers on portable devices, for example). Whilst it’s not all good news there (end-to-end encryption being a controversial issue), on balance they seem to be headed in a better direction than the US is right now. Here’s hoping cooler heads prevail and things settle down, but right now I think we need to buckle up for a bumpy ride!
There’s been a lot of discussion about federal Labor’s plan to ban social media networks for people under the age of 16 years. This has actually been brewing for a little while now, but is getting full media attention now that the US election has finished.
It got me thinking though: what is a social media network? A website? A mobile phone application? Depending on your definition, you might be shocked to learn that the concept of a social media network existed long before either of these things.
Tom Standage published a book back in 1998 (yes, last century), “The Victorian Internet”, which discussed the development of telecommunications, from the early days of visual semaphores and banging pots and pans… through to the wired telegraph, and the parallels with the Internet we have today.
Of course, this book being from the late 90s, a time when the state-of-the-art feature was a mobile phone with user-programmable ringtones that maybe could play Snake, the idea of this global network being accessible from a pocketable device seemed far-fetched. This was just a year before the technological dead end called WAP.
If you consider a social network as being a place where people can exchange messages and ideas… then one might consider the “personal columns” of the local newspaper to be a very early form of social media network. It was a place where members of the public could write in, and have published (at the editors’ discretion) their letter for the readership to see. Sometimes the letters were public in nature, sometimes they were coded: providing a puzzle to challenge armchair cryptographers.
When telegraph networks started springing up, the need for telegraph operators grew, especially as these networks transitioned from being military networks to being a short message service for the general public. The operators themselves would go on to develop their own culture, parts of which survive today in the amateur radio world.
The invention of the telephone did eventually force the closure of the telegraph network, but it too, in its own way, heralded the development of a social media network that, over the course of the 20th century, would become a taken-for-granted fixture in most urban homes.
Many amateur radio enthusiasts are curious about electronics in general, and thus a good number of them became interested in the developing world that was the home computer. As the cost of components came down and parts became more integrated, both radio amateurs, and non-radio electronics enthusiasts alike would experiment with home computers.
Some amateur operators in the late 80s took old dial-up modems and modified them to connect to their radios, developing a protocol and standard we call “packet radio”. Others would leave the modems as they were, write a program to answer the telephone, and create a basic message board: the bulletin board system. People who were versed in both got the idea to make BBSes that faced both ways (packet radio and dial-up).
Some had access to an early Cold War-era computer network called ARPAnet (we now call this the Internet), which had social network services of its own (e.g. Usenet), and they developed gateways that allowed their BBS users to interact on Usenet newsgroups. BBS software suites would also eventually develop peer-to-peer federation protocols like FIDOnet, enabling users of one BBS system to exchange messages with users of other BBSes.
The “normal” folk of the day often looked on such developments with disdain. Derided as just boring “nerds”, the pioneers of this online social networking scene were often ostracised. These were people who were often socially awkward in real life; today we might use the term “neurodivergent” to describe some of them. This online network provided an alternate reality, where your place in the world was judged on merit, on what you knew… rather than on physical attributes. A place where people could be themselves, and not get bullied about it.
The people who developed these systems would later go on to build online communities for themselves on what was then a relatively new Internet-based service, the world wide web. MySpace, LiveJournal, Facebook, Twitter, the ActivityPub platforms (including Mastodon) and BlueSky… are all just further developments of the same ideas. Social networks thus have a very long history.
Thus I come back to: what is a social network? Many of the things you can do on a site like Facebook or Twitter can similarly be done lots of other ways, equally effectively. MMS might be one of the last vestiges of the old WAP protocol, but it still lives on in modern mobile phones, and can send text, pictures and video just as easily as posting to a web-based social media system like Facebook. In short, it is a social media network.
If the federal government wants to ban under-16s from using “social media”, they might as well create a time machine and zap our teens back to the 1700s, as it appears the only communication technologies they’ll legally be able to use will be inventions that were commonplace in that era.
This blog has never been on what I’d call, a high-performance server. In fact, things are a little on the slow side. I try to be frugal with my system resource allocation, with the assumption that my little site does not get a lot of traffic (much less since it’s no longer syndicated on Gentoo Planet). However, I think I managed to get the performance up a notch…
The site runs on my solar-powered server cluster, with a couple of Ceph RBDs: one for the root OS and one for the data (MariaDB / www root / /home). The VM runs AlpineLinux. The VM host was over-provisioned with a larger SSD than required, allowing me to dedicate some space for local cache.
I had thought I could set something up that would organise the cache on the VM host, and abstract it from the VM, but so far, I’ve not gotten around to doing that. (I did have something sort-of working in OpenNebula with flashcache at work, but it was flaky.)
In libvirt, I provisioned a new RBD to serve as the backing store (thus keeping a pristine copy to roll back to should things go pear shaped), and a new LVM volume for the cache. For the time being, I moved the existing volume to be the last device. So I had:
/dev/vda: OS
/dev/vdb: Data volume
/dev/vdc: Cache volume
/dev/vdd: the old /dev/vdb, kept temporarily for data migration
Failed approaches
Firstly, what didn’t work for me, was bcachefs and bcache.
bcachefs
bcachefs wanted to fight me every step of the way, making formatting the volumes difficult with sketchy documentation (especially as I wanted a write-through cache to facilitate VM migration).
bcachefs format gives some very cryptic error messages, and has a somewhat quirky argument syntax for formatting. The command I figured out through trial-and-error was this:
The problem was convincing mount to actually mount it. I was supposed to specify every device, but each time it flatly refused; no matter what order I listed them in, it told me “no such device”.
bcache
This is the underlying caching logic that bcachefs was built on, so I figured I’d try that. This worked better, but I found AlpineLinux had no real knowledge of bcache, and thus did not provide any means for me to bring up /dev/bcache0 before localmount mounted it.
I could have written an OpenRC init script to do this, but I wasn’t certain about this path, so decided to put the idea aside.
Winning approach: lvmcache
Luckily lvm2 has a built-in method: lvmcache. After installing the lvm2 package in AlpineLinux, I blatted the partition tables on my two virtual disks, formatted them as LVM physical volumes, and added them to a volume group.
~ # pvcreate /dev/vdb /dev/vdc
Physical volume "/dev/vdb" successfully created.
Physical volume "/dev/vdc" successfully created.
~ # vgcreate data /dev/vdb
Volume group "data" successfully created
~ # vgextend data /dev/vdc
Volume group "data" successfully extended
Now to create the logical volumes themselves. This wound up being a little tricky because I wanted to use all the available space on each physical volume… I had tried specifying -L ${SZ}G, but this ignored the fact that LVM uses a bit of header space on each physical volume. It complained, but in doing so told me the number of extents that was available, so I was able to use -l ${SZ} to specify that number of extents instead:
~ # lvcreate --size 8G --name datavol data /dev/vdb
Insufficient free space: 2048 extents needed, but only 2047 available
~ # lvcreate -l 2047 --name datavol data /dev/vdb
Logical volume "datavol" created.
~ # lvcreate -n cachevol -l 4095 data /dev/vdc
Volume group "data" has insufficient free space (1023 extents): 4095 required.
~ # lvcreate -n cachevol -l 1023 data /dev/vdc
Logical volume "cachevol" created.
I now had two separate LVM volumes, one on each physical device. Next, to link them:
~ # lvconvert --type cache --cachevol cachevol data/datavol
Erase all existing data on data/cachevol? [y/n]: y
Logical volume data/datavol is now cached.
Great, except I forgot to specify the write mode. Turns out, this is an lvchange away:
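A cached LV’s write mode can be changed with something like this (write-through being what I was after here):

~ # lvchange --cachemode writethrough data/datavol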
I could now format /dev/data/datavol with a filesystem, and migrate the data across. rsync here we come. An update to /etc/fstab and we were in business.
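The tail end of that, roughly (the filesystem choice, mount point and source path here are just examples):

~ # mkfs.ext4 /dev/data/datavol
~ # mount /dev/data/datavol /mnt
~ # rsync -aHAX /srv/ /mnt/
~ # umount /mnt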
So far, things seem to be more snappy, so we’ll keep an eye on it. It’s survived a couple of reboots; the question is what happens when I boost a post on Mastodon: do all the ActivityPub instances out there cause problems? Guess I’ll find out in a moment.
So, a few years ago, we said goodbye to the Brisbane institution that was Classic Hits 4KQ… when some Sydney bright sparks at Here, There & Everywhere, owners of the Australian Radio Network, decided to acquire rival network Grant Broadcasting and merge the two together. This put ARN over their quota for the number of stations they were allowed to operate in the Brisbane region, and so they sold off their oldest station to the Sports Entertainment Network.
That pretty much ended my all-day radio listening right there. The presenters from 4KQ’s morning show wound up on 4BC, but that’s about as much as was left of the old station. Well, of course, no good thing can last forever, and so Laurel, Gary & Mark broadcast their final show this morning. I will admit there was a bit of sadness as Green Day’s “Time Of Your Life” played (in place of Laurel’s Last Word)… then cron on my desktop PC muted qt-dab at 0900… it was time for me to get to work myself.
Peter Fegan takes over that shift on the 30th… and I for one, won’t be tuning in. 4BC now has nothing to offer me.
I had a quick tune around of the DAB+ and FM stations… to see what was on offer.
B105 is not worth bothering with, and hasn’t been for the better part of 30 years now, such is the sad state of top-40 music these days. Sister station Triple M was a station I listened to a lot at the early part of the century. Southern Cross Austereo runs both, as well as a stack of DAB+-only stations; however, those DAB+ stations are all at 32kbps. 48kbps is just tolerable for audio quality on the stereo here… and through headphones I can barely notice some compression distortion. 32kbps is okay for voice, but sounds terrible for music, ringing left, right and centre. So if I go to one of those stations, it’ll be via FM, not DAB+.
In my hopping around, I came across one station that was broadcasting at 24kbps… through the speaker on my little portable DAB+ set I could hear the ringing artefacts. Not as bad as Coles TAS (at 16kbps!), but still bloody terrible! I didn’t realise there was a step between 16kbps and 32kbps… but I stand by my comment that 32kbps is the bare minimum for music… with 48kbps and above highly recommended.
4BC’s sister station 4BH actually adopted a lot of 4KQ’s old format… so for now I’ve set qt-dab up there. The main reason I went to 4BC in the first place was the morning show; I had a choice to make between “the people” and “the format”, and I decided to stick with “the people” for the morning, figuring I could fill the rest of my day with my own music. That’s what I’ve been doing the past couple of years. With “the people” gone, I’m now left with “the format” as the deciding factor. As for audio quality, they broadcast on MW at 1116kHz or DAB+ multiplex 9B at 96kbps — one of the highest bitrate commercial DAB+ streams.
Best case scenario I guess is the intrepid trio from 4KQ pop up here… seems this is the most likely place they’d appear. That said, this is quite likely it for a radio show that’s been going for 30+ years. End of the road. While Laurel and Mark are nowhere near retirement age, I would not be surprised at all if Gary decided to hang his headphones up and officially retire, being the oldest of the three.
I’ll admit I’ve learned a lot about the music I listen to through this show. The trio were enlightening and entertaining in equal measure. I guess time will tell as to whether this really is the end, or just a change of venue.
As for me with channel hopping… I’ve flirted with the idea of starting up a station of my own, but really this is “pipe dream” material. I’d need to team up with someone who knew the media business and could serve to run the organisation, whilst I’d be focussing on the technical matters of getting things on-air. Even there, I think I’d be serving beneath someone more senior — I’d be the apprentice broadcast engineer basically. Given the deep pool of people a prospective station general manager would have to choose from, I think there’d be a lot of competition. This is a job where you measure your success by the number of knives in your back!
Worst case… I have my music, I might tune in to a news broadcast occasionally, but otherwise my radio listening days may have finally ended. We shall see.
So, it’s political season again, and here in Queensland we made the rather foolhardy decision to run our State election roughly a month or so away from the US Federal election.
Foolhardy because the media is too busy guffawing over the allegation of cats and dogs being “eaten” in a debate for an election that people like myself have no say over (and should not have any say over), to properly cover the election that is actually mandatory for people like myself to participate in.
But I digress… in amongst all the comments, there was a post made just recently by Taylor Swift. Now, I know she’s a very successful singer, not that I can name any of her songs (my tastes are for older fare). But, in a recent post, she made a very valid point. Here’s the post (transcribed from a screenshot) in full:
Like many of you, I watched the debate tonight. If you haven’t already, now is a great time to do your research on the issues at hand and the stances these candidates take on the topics that matter to you the most. As a voter, I make sure to watch and read everything I can about their proposed policies and plans for this country.
Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.
I will be casting my vote for Kamala Harris and Tim Walz in the 2024 Presidential Election. I’m voting for @kamalaharris because she fights for the rights and causes I believe need a warrior to champion them. I think she is a steady-handed, gifted leader and I believe we can accomplish so much more in this country if we are led by calm and not chaos. I was so heartened and impressed by her selection of running mate @timwalz, who has been standing up for LGBTQ+ rights, IVF, and a woman’s right to her own body for decades.
I’ve done my research, and I’ve made my choice. Your research is all yours to do, and the choice is yours to make. I also want to say, especially for first time voters: Remember that in order to vote, you have to be registered! I also find it’s much easier to vote early. I’ll link where to register and find early voting dates and info in my story.
With love and hope,
Taylor Swift
Childless Cat Lady
Now, I’ll set aside her endorsement of the US Democratic Party. As she points out, she personally looked into the policies of the parties, and came to that conclusion. It’s the approach that I want to highlight here, and it’s as valid in the US with its esoteric election system as it is here in Australia with our (admittedly partially flawed, but generally highly regarded) preferential system.
I won’t be sharing who I’ll be voting for in the Queensland state election, as to be perfectly honest, I haven’t actually done my homework on that matter yet. Same goes for next year’s federal election.
The take-away I observe is the remark: “If you haven’t already, now is a great time to do your research on the issues at hand and the stances these candidates take on the topics that matter to you the most.”
She’s not saying “vote blue because I did”, she’s saying to go do your homework, and figure out how you will vote. “Your research is all yours to do, and the choice is yours to make.”
Here in Queensland, where I live falls under the seat of “Cooper” (formerly “Ashgrove”). We currently have a Labor candidate, Jonty Bush. From what I understand, she’s done a reasonable job and I don’t have a problem with her being voted back in, but I really need to figure out where she’ll sit on my ballot paper. Probably not at the top; I usually like to reserve the top spaces for smaller parties. But as I say, I haven’t researched the matter much at all right now, so this is all subject to change, and nothing actually requires me to publish what I’m going to do in any case.
Labor recently started a trial of 50c public transport fares… with a view to encouraging their use. I note that the Greens have a policy that goes further (abolishing fares altogether), and it long predates the trial, so I think we can see where Labor’s policy came from. Polling seems to suggest Labor may be on the way out anyway. (And I remember the mess that the LNP made last time! We wound up with Campbell Newman as our local member.)
I’ll have to dig around and see what other policies the parties have, but this is an example of just one issue that one might consider. I strongly urge people to consider more than one issue! You might not care about public transport, but care greatly about mining activities: maybe you’re a voter with lots of shares in mines, and the LNP seems to be cosying up to the miners. That’s up to you.
My approach has been to start all parties with a score of 0, then award or deduct points depending on how I feel about each policy they publish. I then use those “scores” to figure out my preferences. Try to ignore “who” made the policy, and just consider the policy’s content directly. (It’s a pity they don’t offer the policy documents in plain text instead of a PDF or office suite file.) You can decide for yourself whether you do this, or something completely different.
Crucially, there might be some hot-button issue where none of the parties agree on your position. The Israel-Palestine war is a good example, where I’ve heard people in the US say (effectively): “I’m going to vote Republican because the Democrats won’t stop arming Israel!” I’ve got bad news for you, the other side isn’t about to stop the flow of weapons either (quite the opposite in fact), it may be prudent to put that issue aside and focus on everything else for now. Get the “least worst” candidate in, then work with them on the issues that you had to set aside. Some will listen, some won’t.
Anyway, this is just my thoughts on the matter. No matter what part of the world you’re in, the next few years are going to be “interesting” to say the least. We’re already seeing what happens when someone can summon up a supercomputing data-centre to conjure up synthetic photos from text prompts, and I’d be living in a fantasy land if I were to try and make out all users of such systems were benevolent. Thus it’s incumbent on us to seek the source for policy research, go to the party’s website and look there. Don’t believe everything you see on social media, cats and dogs are not being served up on Ohio dinner tables, and not every “policy” published there will be authentic.
For us in Queensland, the Electoral Commission Queensland website would be a good starting point. The ABC will also publish details on their Elections page closer to the date.
Slow-scan television is a means of transmitting still images over HF radio. It has its origins in the 1950s, when operators would use a CRT monitor with a long-persistence phosphor in a darkened room to see the image; modern implementations use digital signal processing software on a conventional OS like Windows or Linux.
For the non-radio folk, you can think of it as a radio-flavoured facsimile service. A picture is taken, scanned (or created directly on the computer), the resulting image file is then encoded as a series of audio tones that are then transmitted over the air. The vast majority of standards out there use frequency modulation (FM) for the actual base-band signal sent to the radio, although there are some newer ones (like the Digital Radio Mondiale-based HamDRM, of which EasyPAL is probably its best known implementation) that use more advanced modulation techniques.
I would like the ability to send and receive SSTV on the bicycle. The Raspberry Pi 4 I have earmarked for the bike can do it, and I’ve been using it here with QSSTV to send and receive pictures as a home station. However, while QSSTV is a very capable client, being a desktop application it is awkward to use on a headless Raspberry Pi. The best solution I’ve come up with is to use a VNC desktop, which is tedious.
I looked around for what else there was, and the answer was, not a lot. So I’d have to get my hands dirty.
Transmitting
Transmitting actually isn’t that difficult; there exist several CLI programs that will take an image file and spit out an audio file (usually Microsoft RIFF .wav) that I can then play out through the sound card. I’m not sure if they do the FSK ID at the end, but that’s not exactly difficult to figure out, and I can probably modify a tool to add it.
In my early experiments with SSTV, this is exactly how I sent some of my first transmissions. I’d need to fire up an image editor to compose the image — it was cumbersome. However, with the raster image already in hand, it wasn’t too bad.
Keying the radio is simple, and I actually have two ways of doing it… one is to use the “Computer Aided Transceiver” port on my radio (a Yaesu FT-897D in this case) to send PTT on/off commands. The other is via the GPIO pins on the Pi, which on the NWDR DRAWS board are mapped to the appropriate pin on the data port. Both work, and are relatively straightforward.
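To illustrate the GPIO route, keying PTT from Python looks something like this (the BCM pin number below is a placeholder, not the actual DRAWS PTT pin; check the board’s documentation):

#!/usr/bin/env python3
# Minimal PTT keying sketch using RPi.GPIO.
import time
import RPi.GPIO as GPIO

PTT_PIN = 12  # placeholder BCM pin; not the real DRAWS PTT pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(PTT_PIN, GPIO.OUT, initial=GPIO.LOW)
try:
    GPIO.output(PTT_PIN, GPIO.HIGH)  # key the transmitter
    time.sleep(5)                    # play the SSTV audio file here instead
finally:
    GPIO.output(PTT_PIN, GPIO.LOW)   # release PTT
    GPIO.cleanup()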
So whilst there’s a few loose ends to tie up here, it seems that is largely handled.
Receiving
This was a big problem, and one I was stuck on for a while. Nearly everything I could find was a GUI application. I didn’t want to run a GUI for this; I wanted a CLI tool that sat in the background and did its thing, ideally launched at boot by systemd. I wanted a few things:
capture of metadata parameters into a separate file
periodic update of the partial image during reception
execute a script when something happens
I found one CLI receiver (there’s this one), but it assumes I already have a recording of the transmission; I’d need something to spot the VIS header and then start recording. QSSTV did nearly everything I wanted, but it is a GUI application: it couldn’t write out the metadata, nor could it trigger a script. slowrx was the same.
I do note that the KiwiSDR seems to use slowrx in its decoder, and they’ve managed to solve this issue, so I took a closer look. With some work, I was able to split slowrx into two parts, a back-end I called libslowrx.a, and a front-end slowrx application. I then wrote a crude slowrxd application that ran on the CLI and used libgd to write image files (I did think of using libpng directly, but its API is fiddly to use). I started this a few days back and polished it off over the week-end. This is released on Github.
As a proof-of-concept, I threw together this SSTV cam website, taking inspiration from VK7OO’s SSTV site. I don’t have an SDR, but that doesn’t mean I can’t create a waterfall of the transmission: I made slowrxd write the received audio to a file, then used good ol’ sox to generate a spectrogram.
I made slowrxd generate an NDJSON log file as it received the image, which meant all kinds of metadata could be emitted live as it happened, including the SSTV mode at the start and the FSK ID at the end, timestamped to the millisecond. This could be parsed with sed and jq to pull out the necessary bits.
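For example, tailing the log and plucking out one field looks something like this (the field names here are purely illustrative and won’t match the real log schema exactly):

tail -F latest.ndjson | jq --unbuffered -r 'select(.type == "fsk-id") | .id'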
file could tell me how big the image was (I guess I could have put that in the log), and netpbm has lots of tools for putting images together. Drawing text was more tricky, but I found convert (from ImageMagick) could give me an image with the text I wanted in a font I liked, and then netpbm‘s pamcat and pamcomp could put the pieces together.
bash ran the whole show, generated the HTML, then used rsync to upload over SSH. Easy.
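A rough sketch of how such a pipeline hangs together (file names, the caption text and the upload destination are placeholders, not what the site actually uses):

#!/bin/sh
# Spectrogram of the received audio, via sox's spectrogram effect.
sox latest.wav -n spectrogram -o waterfall.png

# Render a caption with ImageMagick, convert both images to PNM...
convert -background white -fill black -pointsize 24 \
    label:"Scottie 1 received 2025-01-01 00:00z" caption.png
pngtopam caption.png > caption.pnm
pngtopam latest.png > image.pnm

# ...stack them (pamcat pads with -white if the widths differ), and back to PNG.
pamcat -topbottom -white image.pnm caption.pnm | pamtopng > annotated.png

# Push the result to the web server.
rsync -av annotated.png waterfall.png user@webhost:/var/www/sstv/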
Without much work, I could replace this script with another that just does a nc -u localhost port and sends a single UDP message with the event details, so it could be handled by a separate daemon. So I think I have a solution for that now.
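Something as simple as this would do for that replacement script (the port number and message format are made up for the example):

printf 'RECEIVE_END %s\n' "$1" | nc -u -w1 127.0.0.1 5959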
Templating
This is a sore point in QSSTV: it has a template editor, but it uses its own format that’s basically proprietary to QSSTV, built on Qt abstractions. The code is open-source, so we could pick it apart to figure out exactly what it’s reading and writing, but that’d be a longer project… and the QSSTV templates are pretty limited.
The more I thought about it, the more I started to think about SVG as a template format — it has everything I need and then some. I could use a well-known and highly capable editor like Inkscape to design the templates, leaving fields for my application to fill in, then use an SVG rasteriser to spit out the image at any resolution I wanted.
For a rasteriser, whilst not ideal, I can use Inkscape itself quite successfully. At the cost of dragging in the GTK+ libs (and it complaining about DISPLAY not being set), I can rasterise an SVG with one command:
inkscape -o output.png output.svg
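If the SSTV mode wants a specific resolution (320×256 for many of the common modes, for instance), Inkscape can be told the export size as well:

inkscape -o output.png -w 320 -h 256 output.svg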
The catch was, how do you do templates in Inkscape? Initial research was disheartening, but then I wondered… SVG supports CSS just like HTML. While you can attach style attributes to a CSS class, you don’t have to. SVG doesn’t care if you don’t… and with that, I can identify nodes in the SVG document by the class assigned.
Inkscape makes this possible through the Selectors and CSS palette accessed through the Object menu. Also useful here is the Object Properties palette available in the same menu.
Inkscape Template workflow
To create a template in Inkscape… firstly, create your image as you normally would. Put whatever dummy text in your text fields and dummy images in embedded bitmaps you need to change from code later. Get it looking the way you want with that dummy text.
Next, use the Object Properties palette to assign meaningful IDs to the objects you want to manipulate. Each ID you assign must be unique, so this by itself isn’t useful for fields that are duplicated, but it will help later when you come to assign CSS classes to these fields. The IDs should contain only alpha-numeric characters (0-9, a-z, A-Z), hyphens (-) and underscores (_) (apparently other characters are allowed except white-space, but sticking to this limit will make life a lot easier).
Finally, we use the Selectors and CSS palette to assign classes. Click on an element you wish to assign classes to:
You’ll see there are two panes, either side-by-side or one above the other; the little button group down the bottom-right switches the view. Depending on which layout is selected, you want either the bottom one or the right-hand one.
On the same row are + and - buttons for adding new “CSS selectors”. When you click +, a pop-up box appears with a default value: the ID of the object prefixed with a #. Delete the default text, then type in a . followed by the CSS class name; again, just stick to alpha-numerics, hyphens and underscores… don’t try anything fancy. Then click OK.
You should find in the right (or bottom) pane, you now have the class defined, and your object (identified by the ID you assigned earlier) is listed as one of the members of that class. There’s a + icon beside the class which you can click to add another selected object to the same class, and a garbage can icon beside each member to remove that member from the class.
In the left (upper) pane, you can set style information for this class as well as for the specific selected object. Leave the style information blank for each class (you can apply styles if you like, but we’re simply using the class name as a label here to convey semantic information).
When you’re happy, save the SVG image. Inkscape SVG is fine, unless you’re using a different rasteriser that requires plain SVG.
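Under the hood, the relevant objects in the saved file end up looking something like this (attributes trimmed for brevity; the IDs and class names here are hypothetical examples):

<text id="utctime-field" class="utctime" x="10" y="20">
  <tspan>00:00z</tspan>
</text>
<image id="rx-image" class="rximage" href="placeholder.png"
       x="0" y="40" width="320" height="256" />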
Editing the template is done by simply opening the SVG file in Inkscape and editing it like any other document, keeping in mind those two special panes for assigning CSS classes.
Python-based Templating engine
I did a quick experiment today, and came up with this very crude Python script to demonstrate a templating engine:
#!/usr/bin/env python3

from xml.etree import ElementTree
from collections import namedtuple
import datetime

SVGField = namedtuple("SVGField", ["element", "classes"])

svgdoc = ElementTree.parse("test.svg")
svgroot = svgdoc.getroot()

if svgroot.tag == "svg":
    namespaces = None
else:
    assert svgroot.tag.startswith("{") and svgroot.tag.endswith(
        "}svg"
    ), "Root element is not a SVG tag"
    # Pick out the namespace URI
    namespaces = dict(svg=svgroot.tag[1:-4])

# Figure out what the text and image tags will be called, they'll have
# the same namespace as the `<svg>` tag.
TEXT_TAG = "{%s}text" % namespaces["svg"]
IMAGE_TAG = "{%s}image" % namespaces["svg"]

# Figure out all classes defined and the elements using them
classnames = {}
for elem in svgdoc.iterfind(".//*[@class]", namespaces=namespaces):
    elem_classes = set(
        [cls for cls in elem.attrib.get("class", "").split(" ") if cls]
    )
    field = SVGField(elem, elem_classes)
    for cls in elem_classes:
        classnames.setdefault(cls, []).append(field)

print("All classes: %s" % ", ".join(sorted(classnames.keys())))

NOW = datetime.datetime.utcnow()

# These would be sourced from command line arguments or some file, the
# following shows how to replace the content of text fields.
for cls, fields in classnames.items():
    if cls == "freq":
        value = "123456.789 kHz"
    elif cls == "sstvmode":
        value = "Dummy SSTV Mode"
    elif cls == "utcdate":
        value = NOW.strftime("%Y-%m-%d")
    elif cls == "utctime":
        value = NOW.strftime("%H:%Mz")
    elif cls == "heading":
        value = "Test heading"
    elif cls == "message":
        value = "Test message text"
    else:
        print("Skip fields of class %s" % cls)
        continue

    for field in fields:
        if field.element.tag == TEXT_TAG:
            for tspan in field.element.iterfind(
                "./svg:tspan", namespaces=namespaces
            ):
                tspan.text = str(value)
        elif field.element.tag == IMAGE_TAG:
            # Iterate over the attributes and look for the href tag, which
            # may be prefixed by a namespace.  Make a note of the new value.
            changes = {}
            for attr in field.element.attrib.keys():
                if attr == "href" or (
                    attr.startswith("{") and attr.endswith("}href")
                ):
                    changes[attr] = str(value)

            if changes:
                # Apply the changes
                field.element.attrib.update(changes)

svgdoc.write("output.svg")
If I run this script against my mock template… I get an SVG that renders to this:
Next steps
What I need to do next I guess is to write a web-based front-end that I can use with my tablet, but the mechanics of what I need is all there now. I can design templates for simple messages or embedding a copy of the image another station sent me so I can reply to their transmission. I can render those templates to raster images for transmission. I can capture and display the incoming SSTV traffic.
So, we’ve rolled around into a new year… and this often being the season for get-togethers, we find ourselves sharing more than just food, gifts and company. For some of us, it’s also the trade of unintended gifts in the form of infectious disease.
Thus far, I’ve avoided a second bout; once was enough! The evidence at this stage suggests the risk of long-term effects from this condition compounds each time you get it. Compounding interest on a term deposit is a good thing… compounding medical conditions from a virus is anything but!
I loathe wearing masks, and back when restrictions were finally lifted, I was glad to put mine on the shelf and leave it be. However, back then we had >80% up-to-date vaccination status, the Omicron variant was relatively new, and we had booster shots that targeted this variant. It “felt” relatively safe to do so.
I kept my shots up since my bout. While I hate needles, I hate disease more. I was doing them on a 6-month cadence, but when I enquired in October last year whether I should get another, I was told that since I’m not “vulnerable”, I should wait it out until 12 months. That was before JN.1 knocked on Brisbane’s door.
The Christmas period has been a time of case spikes the last few years, and the 2023 Christmas period is no different, except this time around we had two variants vying for attention: XBB.1.5 and JN.1. The latter has been marked a variant of concern by the WHO, is becoming dominant here in Brisbane, and has prolonged the “tail” of Christmas cases we’ve become accustomed to.
So far, I’ve dodged it. Am I merely asymptomatic? That’s hard to know. RATs are ineffective at detecting these variants without actual symptoms present, and the “gold standard” PCR tests are not readily available.
My frustration is the lack of clear information as to what’s going on.
Community monitoring
There are some people I’d like to call out in particular, who have helped plug a gaping hole in reporting coverage and are helping to make things a lot better…
“Dennis – The COVID info guy” has been doing a fantastic job monitoring the media and collating COVID-19 related articles from across the globe as well as domestically. Most media outlets have stopped reporting on this condition since it’s no longer “novel”, so it’s all too easy for news on it to fly under the radar.
Similarly, Mike Honey has been doing a brilliant job locating the raw data sets and providing great visualisations of that data.
Both these people have been instrumental in surfacing information that might otherwise be difficult or impossible to find, now that we don’t have regular media updates from the respective state governments any more.
They both post to the auscovid19 group. If you’re on the Fediverse (e.g. Mastodon), follow @[email protected] and you’ll see posts from both (among others). I highly recommend this group.
That said, the work these two and others do is somewhat hobbled by the lacklustre reporting from today’s state governments.
Status reporting
Rewind back 2 years, and we had very clear tracking of two things, reported to the general public:
the number of cases detected, hospitalised and in ICU, from week to week, for each area
the number of people vaccinated, and to what level
Admittedly, this was at a time when 3 shots was the most anyone had unless they had special consideration. These days, the better approach is to just consider whether someone is “up to date”. For most people, that is “a shot in the last 12 months”, or “a shot in the last 6 months” for “vulnerable” people.
We also had a week-by-week snapshot of case numbers, and in many cases, waste-water testing data.
This has all been almost completely abandoned; Queensland Health gives monthly stats, if that. I feel that, given how fast this virus moves and how mobile we are now, this was a hideously naïve decision.
Admittedly, case numbers require people to report cases (either through their doctor or directly), but vaccinations are data that could be automatically collated and published. We don’t need to name-and-shame people who are not up-to-date… but a break-down of people who had a shot “within the last 6 months”, “within the last 12 months”, “12 or more months ago” and “never”, for each local government area, could be a great start!
Waste-water testing is also a pretty good proxy for individual case numbers. It’d be worth seeing that published again.
It was nice to see it all broken down for us, but even just having the raw data would allow those of us in the community with the tools and expertise to crunch the numbers, and to “do our own risk assessment”.
Mask requirements
I hate the idea of going back to needing them, but it seems we dropped restrictions way too early. Dropping restrictions really needed to happen after another crucial step: retrofitting of buildings’ HVAC systems to ensure they properly “scrub” the air.
This requirement was hinted at years ago, with bushfire smoke permeating through buildings and triggering smoke alarms. When COVID-19 first showed up, we thought it was “droplet” spread, hence the insistence on “social distancing” (1.5m or more), keeping surfaces sanitised, and any kind of mask you could get your hands on.
Now, we understand it’s aerosol spread, which moves through buildings just like smoke does. It hangs in the air just like smoke does. It can hang there for hours, and a slight draft can spread it from one end of the building to the other. 1.5m separation and clean bench-tops are meaningless.
There’s also a call to move to KF94 or better (N95/P2 or N100/P3) masks as opposed to crappy droplet masks, ideally ones that filter both ways (inhaled and exhaled). Ocular transmission has also been observed; a face shield or glasses are sufficient protection for most people, but there’s still a small risk there. Aerosol spread, though, requires something that properly seals and filters down to PM2.5 particle levels.
Here in Queensland, it’s up to the individual business what they allow. My local doctor actually requires masks before entry, but is seemingly not fussy about which ones you choose.
That’s a good thing in some ways, because valve-less masks really do not agree with me: I’ve tried them many times and found I couldn’t stand wearing one for more than a few minutes… if I force myself to wear one for longer, I find the constant re-breathing of my own breath causes me to become light-headed at first, and later come down with cold-like symptoms.
That said, when I was doing demolition work for HSBNE, we had vented P2/N95 masks. Those gave me no problems, and in theory I could use those. The catch is, these only stop you breathing in COVID-19 particles while you are wearing one; they do nothing about what you breathe out. They’ll work just fine if you can keep wearing one 100% of the time — but no one truly can. You have to eat and drink at some point, you’ll need to clean your teeth, you might need to show your full face to someone for identification… you may even need someone looking in your mouth for medical care. The moment you do, you can be exposed, develop an infection, and from that point nothing is “containing” the viral load you are shedding through exhaled breath.
I actually spotted a bargain a couple of years back: a full-face elastomeric respirator with P3/N100 filters, going cheap in a clearance. When I got it, I tried it on and was instantly amazed: this thing was easier to breathe in than anything I’ve owned or used before. It does, though, have the same problem out-of-the-box as the N95s we were using at HSBNE: an unfiltered exhalation port. Unlike those masks, which were single-use disposable types, this one could (unofficially at least) be retrofitted.
The model I bought also had another trick: it could accept an air hose from a PAPR set-up, which was my next logical move if this mask didn’t work. (That said, a PAPR kit with hood is >$2000… vs $200 for this model.) I haven’t yet needed this, but it’s a welcome feature.
That filter mod was reversible, so a good option if you use the mask for work purposes and need to keep things “stock”. I found, though, that I could “optimise” things a little by drilling some holes to allow better airflow through the makeshift exhalation filter.
This mod, although not reversible, did not compromise the filtering ability of the mask, since it simply added more holes to an outlet grille that protects the exhalation valve from object ingress.
I’ve since ordered a second mask which I’ll leave unmodified, to use in cases where the mod is seen as unacceptable or to replace the first mask if it becomes damaged.
I did briefly consider a half-face model, but these seem to be harder to modify with exhalation filtering. I also have a decent stash of filters that fit the existing mask — it was cheaper to buy the full-face (which is still being sold at a clearance price) than a compatible half-face mask.
What I think may happen
We’re seeing a perfect storm of three things coming together:
apparently lapsed vaccination status for a majority of the population
a lack of official monitoring data
more and more infectious strains, some of which are able to evade vaccine immunity while simultaneously causing more serious infections
What the northern hemisphere cops in its winter, we normally see in our winter some six months later. Not that COVID-19 is seasonal: it isn’t. However, lots of other diseases are, and these, in combination with COVID-19, mean we’ll likely be in for a doozy of a winter!
The Queensland Government has not said they’d go back to lock-downs or mask mandates like the “bad old days”. In fact, they’ve so far been saying the opposite. However, if the little data they’re still collecting suggested such measures were still required, I would not at all be surprised if they back-flipped on one or both of these areas. It is a big reason why I refuse to go interstate at the moment — the fear of being locked out of home!
My feelings on this
One thing that frustrates me is the lack of official guidance coupled with the lack of data. There’s no guidance from the Queensland Government suggesting what should happen, so everyone makes their own rules. There’s no data, so those decisions all seem arbitrary.
And when you do make a decision unilaterally, it seems no matter which way you go, it’s wrong. Earlier in the pandemic, I tried to lock down and isolate as much as possible — my reasoning being that if masks were good, not being there at all was the platinum standard for avoiding disease spread.
Some insist this is the right thing to keep doing. Don’t go out, stay home, work from home, and mask up everywhere.
It is not lost on me that I co-habit with someone who is in the “vulnerable” group (in this case: over the age of 60)… so I do need to be at least a little careful.
That said, I find myself pulled towards social outings where a mask would be highly awkward or unworkable, often by family members who are in this “vulnerable” group. Refusing that seems like the wrong thing to do as well.
I almost feel like I’m being unintentionally gaslit from both sides. My instinct is to try and “blend in”: that comes from decades of trying to “mask” my Asperger’s Syndrome… doing something different to everyone else flies in the face of that no matter how good a reason you have to do so.
What I wish governments would do
Resume more regular reporting of data
We don’t need weekly “front the media” discussions, but there should be a data feed that we can all access, one that can feed community-driven dashboards so people can at least come to a semi-informed decision on what we should do as individuals.
This should include both the current circulating respiratory diseases (influenza, RSV, etc as well as COVID-19), and vaccination status against each.
Instigate and enforce standards for air quality
We now know these diseases spread through aerosol transmission. We also know other environmental threats like dust storms and bush fires can wreak havoc on our urban buildings.
Masks work, but they’re not practical 100% of the time. It has been found that judicious use of air-purifying devices and retrofits to HVAC systems can deliver dramatic improvements in respiratory health. This needs to be better studied, with minimum standards devised.
With that in hand, building owners should first be encouraged (through grants or other means) to apply this knowledge, assess how their buildings fare, and fix any problems identified. Later, enforcement can be applied to catch the stragglers. Clean, disease-free air should be a right, not a privilege!
What I intend to do
Sadly, masking is not going away any time soon.
I don’t know how I’ll manage at the dentist — COVID-19 will not avoid a potential host because they happen to be occupying the dental chair, and if it’s unsafe to sit in the waiting room unmasked, it’s equally unsafe in the dentist’s chair!
Right now, with days exceeding 30°C and 75% RH, it’s too hot to be wearing one of these masks all the time unless you absolutely have to. It’s also hard to communicate in a mask.
At work
How my workplace would react to me wearing one is a complete unknown. I have previously avoided infection by staying at home… last year I was in a hybrid arrangement, working from home 4 days a week and in the office one day a week. This has changed to a 2:3 ratio (2 days at home, 3 in the office). It’s an open-plan office in a shared building, with the bottom floor occupied by some medical facilities.
I can work out on the back deck, and have done so… I might be doing that more when the weather is fine, since the risk of transmission outside is far lower. I’d rather work from home if risks are high, but there may be some days where going in is unavoidable. I dodged a few bullets late last year.
If there’s a building-wide or workplace-wide mask mandate, there’s no decision to make — it’s either work from home or mask-up. If lots of people begin calling in sick, I guess I’ll have to make that assessment on the day.
Dining out
It’s common for my father and me to dine out… there are four regular places we go to (in Ashgrove):
Taj Bengal: No option to dine outside, but usually this place is quiet on Mondays/Tuesdays… dining on these days should be relatively “low risk”
Cafe Tutto: There’s an outside dining area which is often more pleasant than sitting inside, we also dine there on quieter days. Decent airflow, often quiet, low-risk.
Osaka: Indoor dine-in only, probably the riskiest place as it can get quite popular, but usually things have been quiet, so it hasn’t been a problem.
Smokey Joe’s Pizza: Outside dining only, whilst I’d like to see the overhead fans pushing a little more air, things are relatively open and I don’t feel much threat dining here. We try to get there early because it gets busy later in the evening.
Concerts
My mother and I have been resuming going to concerts… so far, they’ve all been open-air affairs. The most “enclosed” one being Sir Paul McCartney’s “Got Back” concert at Lang Park in November last year.
I’d be very surprised if someone didn’t have some viral load there, but the open-air format means there’s not much chance for diseased aerosols to hang around.
That said, there’s security to get past. They need to identify you, and how they’d react to such a mask is a complete unknown; they may consider it excessive. I think if we are spread out enough, and there’s a decent breeze keeping the air moving, we should be safe enough without.
Shopping
Not that I do a lot of it, but right now I “read the room”… lately, however, I’ve seen more and more people masking. This is one area where masking is actually more practical.
I think this year I should try to make an effort in this area, as I have little excuse to do otherwise.
Radio comms exercises
This is an area where I really do need to be able to communicate clearly. I’ll have the mask with me, but it’ll depend on what I’m doing at the time. If I’m operating a station, the mask may need to stay off in order for me to do my job properly.
Outdoors
This is where I feel there’s the least risk. We’ll see how this winter plays out. I might mask-up for the hell of it if it gets cold enough.
Exhalation valve filtering
In cases where I do decide to mask-up… exhalation valve filtering will be predicated on a few factors:
Is there a mask mandate in place requiring this to be filtered? If yes, I’ll put the filter in.
Am I in a medical, disability care or aged-care facility? Again, in goes the filter.
Is this otherwise a unilateral decision in a medium or low-risk situation? I might not bother; it only takes a moment to slip the filter in if required.
Vaccination
My next shot is due in May. Around late March, I’ll have to put a booking in. Doing this is probably the single most important thing I can do.
What I urge others to do
If you’re not vaccinated, or your status is lapsed, go book a shot! The risks are low, and prevention is many times better than any cure.
If you own a building that people live or work in, go check it out for airflow issues. Employers and home owners will thank you.
Wear masks in “high-risk” situations: i.e. indoors with high population densities, in places where lots of vulnerable people are (e.g. disability and aged care centres) and in places where sick people congregate (e.g. doctors, hospitals). If you can manage it, do it in low-risk situations (not everyone can).
Stay home if you’re sick unless you’re getting medical attention (and go straight home after you’re done).
Above all: do not judge those for their mask-wearing choice either way — some just can’t wear masks at all, some will wear them all the time.
As we enter 2024, one technology seems to be looming large over many facets of society. Back in the 1960s, the idea that a “machine” (“computers” were actually people who operated calculating machines) could “think” for itself and give “intelligent” answers was the stuff of science fiction.
Television shows like Star Trek and movies/books like 2001 popularised an ever-present voice-controlled assistant that could be hailed, and asked questions or given instructions. Most of these were benevolent (2001’s HAL being a notable exception).
Fast forward some 60 years, and we now have voice assistants from major technology vendors like Amazon (Alexa), Apple (Siri) and Google (“OK Google”). Microsoft tried to jump in on this too with Cortana in Windows 10, since removed. Alexa and Siri are allegedly bleeding their parent companies’ income as the novelty wears off… and so these technology firms are starting to look at what’s next.
The latest gold rush seems to be generative AI. This has been brewing for some time.
Many moons ago I recall mucking around with a Markov chain plug-in that was embedded in Perlbot on IRC (no_body on the old Freenode network). Very crude, but it sometimes did generate somewhat coherent sentences. It was done for fun, and ran on the scrap CPU cycles of an old PIII 550MHz server that also hosted this blog and acted as a web server. Nothing huge by any stretch of the imagination. No GPU in sight.
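If you’ve never seen one, here’s a minimal sketch of the technique in Java (purely illustrative, not the actual Perlbot plug-in): tally which word tends to follow which in a corpus, then wander through that table at random.

import java.util.*;

// Minimal sketch of the bigram Markov-chain idea (not the actual Perlbot plug-in):
// learn which word tends to follow which, then ramble through that table at random.
public class MarkovSketch {
    public static void main(String[] args) {
        String corpus = "the cat sat on the mat and the cat ate the rat";
        String[] words = corpus.split("\\s+");

        // "Training": record every word seen following each word.
        Map<String, List<String>> chain = new HashMap<>();
        for (int i = 0; i < words.length - 1; i++) {
            chain.computeIfAbsent(words[i], k -> new ArrayList<>()).add(words[i + 1]);
        }

        // "Generation": start somewhere and keep picking a random successor.
        Random rng = new Random();
        String current = words[0];
        StringBuilder out = new StringBuilder(current);
        for (int i = 0; i < 12 && chain.containsKey(current); i++) {
            List<String> successors = chain.get(current);
            current = successors.get(rng.nextInt(successors.size()));
            out.append(' ').append(current);
        }
        System.out.println(out);
    }
}

Feed it a channel’s worth of IRC chatter rather than one sentence and you get exactly that sort of occasionally-coherent rambling: grammatically plausible in short bursts, but quick to wander off-topic.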
A few years ago, we started seeing articles about AI systems that could generate imagery, forerunners of the likes of DALL-E. Ask one to generate a beach scene and you’d get some weird psychedelic image which vaguely looked like a beach if you squinted right, but with odd things merged together, like a seagull merged into a railing or building. Faces were badly distorted; nothing looked “right”.
Unfortunately, I cannot recall where I saw the image I’m thinking of or what keywords to search that will summon it. Otherwise I’d show an actual example. (I think it was either on The Register or Ars Technica… most likely pre-pandemic.)
Fast forward to 2021, and yes, it could generate a vaguely believable image, but it still struggled with human anatomy. A good example of this is the faked Donald Trump arrest photo that was doing the rounds:
Donald Trump being arrested by police? No, this is an AI-generated image. (Source)
This was a big improvement on what came a few years before it, but it still had lots of visual defects.
This time last year, ChatGPT v3 was available to the general public, and it could passably converse with people. For a statistical model, it did a remarkable job of appearing “intelligent”, but ask it to perform some basic tasks, and it soon fell apart. Yes, it could generate code, but you’d constantly have to massage the prompt to get code that even compiled, let alone functioned the way required.
The big rub with all of this is the extreme amount of computation required to render the result of a simple prompt. Whether the output be text, an image, audio or video… generative AI is often highly computationally expensive, requiring vast data centres crammed full of GPUs and special-purpose ASICs, much like the cryptocurrency rigs of a few years ago. There are some small models that can run on your local computer. A top-of-the-line Raspberry Pi can just cram in some AI models with some trade-offs in accuracy; however, you cannot train an AI model with such modest hardware.
Generating the models is the real sticking point: it requires vast compute resources and, in addition, lots of data. It’s Johnny 5 on steroids! Where is that data sourced from? More often than not, it is scraped from websites without authors’ consent. While some content is public domain, there are examples where copyrighted material was used.
Yes, we can point and laugh when an AI hallucinates a watermark, but for the copyright holder or would-be user, this is really no laughing matter. Microsoft is already facing a lawsuit from The New York Times over Bing Chat (now Copilot) spitting out big chunks of copyrighted articles.
A human usually has a vague idea where they learned something, even if they can’t find it later… and based on that knowledge, they might have some idea whether such content can be legally used in some given context, or can at least ask. AIs typically do not tell you what source material was used in the construction of the output, nor is there any consideration given to whether you can legally use that material.
Some vendors try to make that your problem: MailChimp recently added an AI feature to its mailing list offering, but then made the user responsible for checking whether the content it generated was appropriately licensed… and decided that your user-generated content was appropriate to feed the training of said AI engine.
It has been ruled in various courts that, as purely AI-generated content is not “human generated”, it is not eligible for copyright protection. (This ruling is why I was able to include the “Trump arrest” image above despite it not being “my work”.)
This is not the last we’ll see of this technology. AI is actually a very old term dating back to the very early days of programmable electronic computers, from ELIZA (which really was a testing ground for pattern matching, not AI at all!) and PARRY (which was the same idea expanded a little). It includes tools like expert systems. Anyone who’s dealt with open-source software will have seen one very famous expert system: make.
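To make that last claim concrete, here’s a toy sketch in Java (not real make; the file names, rules and timestamps are invented for illustration): an expert system is essentially a knowledge base of rules plus an inference engine that decides which rules to fire, and make’s target/dependency/command triples fit that mould.

import java.util.*;

// Toy illustration (not real make): a knowledge base of rules
// ("target depends on X, rebuild with Y") plus an inference step
// that decides which rules to fire.
public class ToyMake {
    record Rule(String target, List<String> deps, String command) {}

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("app",    List.of("main.o", "util.o"), "ld -o app main.o util.o"),
            new Rule("main.o", List.of("main.c"),           "cc -c main.c"),
            new Rule("util.o", List.of("util.c"),           "cc -c util.c")
        );
        // Pretend timestamps: a higher number means a newer file.
        Map<String, Integer> mtime = new HashMap<>(Map.of(
            "app", 5, "main.o", 4, "util.o", 4, "main.c", 9, "util.c", 1));

        build("app", rules, mtime);
    }

    static void build(String target, List<Rule> rules, Map<String, Integer> mtime) {
        for (Rule r : rules) {
            if (!r.target().equals(target)) continue;
            r.deps().forEach(d -> build(d, rules, mtime));   // recurse over dependencies first
            boolean stale = r.deps().stream()
                .anyMatch(d -> mtime.get(d) > mtime.getOrDefault(target, 0));
            if (stale) {
                System.out.println(r.command());             // "fire" the rule
                mtime.put(target, 10);                       // mark the target as freshly built
            }
        }
    }
}

Run against these pretend timestamps, it decides main.o is out of date, rebuilds it, and that in turn forces app to be re-linked: exactly the kind of rule-driven inference make performs over a Makefile.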
Having a system that can inspect a photo, describe what is in the image, and read out whatever text might be important would be a game changer for the visually impaired. In this case, it’s simply describing what is there.
Having a text-to-speech tool that could be trained on recordings of the voice of someone who has lost their ability to speak (e.g. to motor neurone disease), which that person could then use to communicate, would be a very noble use of generative AI.
The surviving members of The Beatles recently did something similar with the song “Now and Then“, taking old recordings of John Lennon’s demos and doing some sophisticated signal processing to separate out the components so that a studio-grade recording could be produced.
The technology does have good uses. In both the latter cases, we’re not “putting words into the mouths” of these people: they’re their words, and they chose them.
However, I think this year we’ll likely see its dark side, if we haven’t done already. Stephen Fry got a rude shock when he came across an audio book apparently “read” by him, except it was a book he had never actually read: it was the product of generative AI. Someone had trained a text-to-speech model on his voice, then fed this book into it.
Imagine someone using tools like that to dupe a work colleague into resetting a password and enrolling a new 2FA token over the telephone? Depending on where you work, that could have disastrous consequences.
For this reason, I flatly refuse to touch Microsoft Teams. The last time I used it (in my browser) was for one particular meeting a couple of years ago… it picked up that I had a headset and used that for speaker audio, but when it came to the microphone, did it use the same device? Noooo… the line-in socket connected to an old Sony ST-2950F stereo tuner was more interesting!
Since then, it too has gotten the AI treatment, with little transparency on what that AI is trained on, what its functions are, and what the resulting data sets are used for. Furthermore, we’re to trust them to store such training data responsibly? The same mob that wrote code that accepted an expired and incorrectly signed digital certificate as an access-all-areas pass?
Meanwhile, the snake-oil salesmen are out in force, and the investors are going wild. We’re seeing ChatGPT-powered sales and service bots appear on all kinds of websites now (until they’re caught out). There are also lots of sites with AI-generated screed polluting search engine results. It’ll likely play a big part in the upcoming 2024 US Federal election. We’re in for a wild year, I think.
I, for one, do not use ChatGPT or its ilk in my day-to-day work, and refuse to do so. My position on AI-infused tools like Microsoft Teams remains the same until such time as the AI feature is removed or its role is better clarified.
I still have code up on GitHub, as it was there prior to Microsoft’s purchase of that service: I don’t like that my code may be being used in this manner, however the worst-case scenario is copyright infringement — removing my code from GitHub does not prevent this. I regard video and audio differently: as these can be used for impersonation, I am not going to willingly supply such a feed directly into a tool that may be training itself on it for purposes unknown to me.
Right now, LLMs (large language models) are approaching the “peak of inflated expectations” phase of the Gartner hype cycle. I figure the hype will die off before long, and only then will their actual utility come to the fore. They may improve the accuracy of machine language translations, specialised ones might be able to give domain-specific advice on a topic (much like a fancy expert system), and they may be able to fill in the gaps where a human can’t be there 100% of the time.
They won’t be replacing artists, journalists, programmers, etc long-term. Some of us will possibly lose jobs temporarily, but once the limitations are realised, I have a feeling those laid off will soon be fielding enquiries from those wishing to slay the monster they just created. It’ll just be a matter of time.