ManiPylator focusing its laser pointer at a page.

Simulation And Motion Planning For 6DOF Robotic Arm

[Leo Goldstien] recently got in touch to let us know about a fascinating update he posted on the Hackaday.io page for ManiPylator, his 3D-printed six degrees of freedom (6DOF) robotic arm.

This latest installment gives us a glimpse at what's involved in command and control of such a device, as well as what goes into simulation and testing. Much of the requisite mathematics is introduced, along with a long list of links to further reading. The whole solution is built entirely on free and open source software (FOSS): a sizable stack of it, in fact, with planning and simulation tools sitting on top of glue like MQTT message queues.

The practical exercise for this installment was to have the arm trace out the shape of a heart, given as a mathematical equation expressed in Python code, and it fared quite well. Measurements were taken! Science was done!
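
For the curious, one classic parametric heart curve, sampled into XY waypoints a motion planner could chew on, looks something like this minimal sketch. This is not [Leo]'s actual code; the scaling and waypoint count are our own assumptions:

```python
# A minimal sketch (not the project's code) of a common parametric heart curve,
# sampled into XY waypoints an arm controller could trace.
import numpy as np

def heart_waypoints(n_points=200, scale=0.005):
    """Sample the classic parametric heart curve into (x, y) waypoints.

    `scale` shrinks the curve to something workspace-sized; the unit here
    (metres) is purely an assumption.
    """
    t = np.linspace(0, 2 * np.pi, n_points)
    x = 16 * np.sin(t) ** 3
    y = 13 * np.cos(t) - 5 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(4 * t)
    return np.column_stack((x, y)) * scale

waypoints = heart_waypoints()
print(waypoints[:5])  # first few XY targets to feed to a motion planner
```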

We last brought you word about this project in October of 2024. Since then, the project name has changed from “ManiPilator” to “ManiPylator”. Originally the name was a reference to the Raspberry Pi, but now the focus is on the Python programming language. But all the bot’s best friends just call him “Manny”.

If you want to get started with your own 6DOF robotic arm, [Leo] has traced out a path for you to follow. We’d love to hear about what you come up with!

Work, Eat, Sleep, Repeat: Become A Human Tamagotchi

When [Terence Grover] set out to build a Tamagotchi-inspired simulator, he didn't just add a few modern tweaks. He ditched the entire concept and rebuilt it from the ground up. Forget cute wide-eyed blobby animals and pixel-poop. This Raspberry Pi-powered project trades nostalgia for brutal realism: inflation, burnout, capitalism, and the occasional existential crisis. Think Sims meets cyberpunk, rendered charmingly in Python on a low-res RGB LED matrix.

Instead of hunger and poop meters, this dystopian pet juggles Maslow’s hierarchy: hunger, rest, safety, social life, esteem, and money. Players make real-life-inspired decisions like working, socialising, and going into education – each affecting the stats in logical (and often unfair) ways. No free lunch here: food requires money, money requires mind-numbing labour, and labour tanks your rest. You can even die of overwork à la Amazon warehouse. The UI and animation logic are all hand-coded, and there’s a working buzzer, pixel-perfect sprite movement, and even mini-games to simulate job repetition.
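
For a feel of how that kind of trade-off logic might look in code, here's a hand-rolled sketch. The stat names echo the ones above, but the numbers and structure are our own invention, not [Terence Grover]'s implementation:

```python
# Invented sketch of need-vs-need trade-offs; values are not from the project.
needs = {"hunger": 50, "rest": 50, "safety": 50, "social": 50, "esteem": 50, "money": 20}

# Each action nudges several needs at once: no free lunch.
ACTIONS = {
    "work":      {"money": +15, "rest": -20, "social": -5},
    "eat":       {"hunger": +25, "money": -10},
    "socialise": {"social": +20, "rest": -10, "money": -5},
    "study":     {"esteem": +10, "money": -15, "rest": -10},
}

def apply_action(action):
    for need, delta in ACTIONS[action].items():
        needs[need] = max(0, min(100, needs[need] + delta))
    if needs["rest"] == 0:
        print("Died of overwork.")  # the bleakest possible ending

apply_action("work")
print(needs)
```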

It's equal parts social commentary and pixel art fever dream. While we covered Tamagotchi recreations some time ago, this one makes you the needy survivor. Want your own dystopia in 64×32? Head over to [Terence Grover]'s GitHub and fork the full open source code. We'll be watching. The Tamagotchi certainly is.

Hardware Built For Executing Python (Not Pythons)

Lots of microcontrollers will accept Python these days, with CircuitPython and MicroPython becoming ever more popular in recent years. However, there’s now a new player in town. Enter PyXL, a project to run Python directly in hardware for maximum speed.

What's the deal with PyXL? "It's actual Python executed in silicon," notes the project site. "A custom toolchain compiles a .py file into CPython ByteCode, translates it to a custom assembly, and produces a binary that runs on a pipelined processor built from scratch." Currently there isn't a hard silicon version of PyXL, which is no surprise given what it costs to make a chip from scratch. For now, it exists as logic running on a Zynq-7000 FPGA on an Arty-Z7-20 devboard. An ARM core helps out with setup and memory tasks, but the Python code is executed entirely in dedicated hardware.

The headline feature of PyXL is speed. A comparison video demonstrates this with a measurement of GPIO latency. In this test, PyXL runs at 100 MHz and achieves a round-trip latency of 480 nanoseconds, compared to MicroPython running on a PyBoard at 168 MHz, which takes a much slower 15,000 nanoseconds. The project site claims PyXL can be 30x faster than MicroPython based on this result, or 50x faster when normalized for the clock speed differences.
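
A quick back-of-the-envelope check of those claims; the benchmark figures are from the comparison video, only the arithmetic here is ours:

```python
# Sanity-check the claimed speedups from the published latency figures.
pyxl_ns, pyboard_ns = 480, 15_000      # round-trip GPIO latency
pyxl_mhz, pyboard_mhz = 100, 168       # clock speeds used in the comparison

raw_speedup = pyboard_ns / pyxl_ns                    # ~31x
normalized = raw_speedup * (pyboard_mhz / pyxl_mhz)   # ~52x at equal clocks
print(f"raw: {raw_speedup:.0f}x, clock-normalized: {normalized:.0f}x")
```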

Python has never been the most real-time of languages, but efforts like this are pushing it in that direction. The aim is that it may finally become possible to write performance-critical code in Python from the outset. We've taken a look at Python in the embedded world before, too, albeit in very different contexts.

Non-planar 3D print on the bed

Improved And Open Source: Non-Planar Infill For FDM

Strengthening FDM prints has been discussed in detail over the last few years. Solutions and results vary, as everyone's requirements differ. Now [TenTech] shares the latest improvements to the post-processing script he first created back in January. The script literally bends your G-code to its will, using non-planar, interlocking sine wave deformations in both infill and walls. It's now open source, and it plugs right into your slicer of choice: PrusaSlicer, OrcaSlicer, or Bambu Studio. If you're into pushing your print strength past the limits of layer adhesion, but the earlier version wasn't quite the fit for your printer, give this improved one a try.

Traditional Fused Deposition Modeling (FDM) prints break along layer lines. What makes this script exciting is that it lets you introduce alternating sine wave paths between wall loops, removing clean break points and encouraging interlayer grip. Think of it as organic layer interlocking – without switching to resin or fiber reinforcement. You can tweak amplitude, frequency, and direction per feature. In fact, the deformation even fades between solid layers, allowing smoother transitions. Structural tinkering at its finest, not just a cosmetic gimmick.
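
To make the concept concrete, here's a heavily simplified sketch of a G-code post-processor in the same spirit. It is not [TenTech]'s script; the regex handling and the amplitude and frequency values are purely illustrative:

```python
# Simplified illustration only: superimpose a sine-wave Z offset on extrusion
# moves so neighbouring layers interlock instead of stacking flat.
import math
import re

AMPLITUDE = 0.2   # mm of Z deviation (assumed value, not the script's default)
FREQUENCY = 0.5   # waves per mm of X travel (assumed value)

def wobble(line, layer_z):
    """Add a sine-wave Z offset to an extrusion move on a known layer."""
    x_word = re.search(r"X([-\d.]+)", line)
    if layer_z is None or not line.startswith("G1") or "E" not in line or not x_word:
        return line
    z = layer_z + AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * float(x_word.group(1)))
    # Drop any existing Z word and append the deformed one.
    return re.sub(r"\s?Z[-\d.]+", "", line).rstrip() + f" Z{z:.3f}"

layer_z = None
with open("input.gcode") as src, open("wavy.gcode", "w") as dst:
    for raw in src:
        z_word = re.search(r"Z([-\d.]+)", raw)
        if raw.startswith(("G0", "G1")) and z_word and "E" not in raw:
            layer_z = float(z_word.group(1))   # travel/layer-change move: remember height
        dst.write(wobble(raw.rstrip("\n"), layer_z) + "\n")
```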

All this comes without needing a custom slicer or any firmware mods. Just Python, a little G-code, and a lot of curious minds. [TenTech] is still looking for real-world strength tests, so if you've got a test rig and some engineering curiosity, this is your call to arms.

The script can be found on his GitHub. Watch his full video here, give the script a spin, and let us know your mileage!

Writing A GPS Receiver From Scratch

GPS is an incredible piece of modern technology. Not only does it allow for locating objects precisely anywhere on the planet, but it also enables the turn-by-turn directions we take for granted these days, all without needing anything more than a radio receiver and some software to decode the signals constantly being sent down from space. [Chris] took that last bit as somewhat of a challenge and set off to write a software-defined GPS receiver from the ground up.

Since GPS started as a military technology, the level of precision needed for things like turn-by-turn navigation wasn't always available to civilians. The "coarse" positioning is only accurate to within a few hundred meters, so this legacy capability is the first thing [Chris] tackles here. It is pretty fast, though, with the system able to resolve a location within 24 seconds of a cold start and then display its information in a browser window. Everything in this build is done in Python as well, meaning that it's a great starting point for investigating how GPS works and for building other projects from there.

The other thing that makes this project accessible is that the only other hardware needed besides a computer that runs Python is an RTL-SDR dongle. These inexpensive TV dongles ushered in a software-defined radio revolution about a decade ago when it was found that they could receive a wide array of radio signals beyond just TV.
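
If you want to dip a toe in before writing a full receiver, grabbing raw L1 samples with the pyrtlsdr bindings is a one-screen affair. The sample rate and capture length below are our assumptions, not [Chris]'s settings:

```python
# Minimal capture sketch: tune an RTL-SDR to GPS L1 and record raw IQ samples
# for offline acquisition experiments.
import numpy as np
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6     # twice the 1.023 MHz C/A chipping rate
sdr.center_freq = 1575.42e6   # GPS L1 carrier
sdr.gain = 'auto'

samples = sdr.read_samples(2_048_000)   # roughly one second of complex IQ
sdr.close()

np.save("gps_l1_iq.npy", samples.astype(np.complex64))
print(f"captured {len(samples)} samples")
```

Bear in mind you'll need an antenna with a clear view of the sky (and a bias tee if it's an active patch antenna) before any of those samples actually contain satellites.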

Import GPU: Python Programming With CUDA

Every few years or so, a development in computing results in a sea change and a need for specialized workers to take advantage of the new technology. Whether that's COBOL in the 60s and 70s, HTML in the 90s, or SQL in the past decade or so, there's always something new to learn in the computing world. The introduction of graphics processing units (GPUs) for general-purpose computing is perhaps the most important recent development in the field, and if you want to develop some new Python skills to take advantage of this modern technology, take a look at this introduction to CUDA, which allows developers to use Nvidia GPUs for general-purpose computing.

Of course CUDA is a proprietary platform and requires one of Nvidia's supported graphics cards to run, but assuming that barrier to entry is met, it's not too much more effort to use it for non-graphics tasks. The guide takes a closer look at the open-source library PyTorch, which allows a Python developer to quickly get up to speed with the features of CUDA that make it so appealing to researchers and developers in artificial intelligence, machine learning, big data, and other frontiers in computer science. The guide describes how threads are created, how they move through the GPU and work together with other threads, how memory can be managed on both the CPU and GPU, how CUDA kernels are created, and how everything else involved is handled, largely through the lens of Python.
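
As a taste of that workflow, here's a small, generic PyTorch snippet (not taken from the guide) that checks for a CUDA device, moves data onto it, and runs a matrix multiply there:

```python
# Generic PyTorch CUDA workflow: pick a device, allocate tensors on it, compute.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"running on {device}")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

c = a @ b                      # the matmul runs as CUDA kernels on the GPU
if device.type == "cuda":
    torch.cuda.synchronize()   # kernels launch asynchronously; wait for them
print(c.sum().item())
```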

Getting started with something like this is almost a requirement to stay relevant in the fast-paced realm of computer science, as machine learning has taken center stage in almost everything related to computers these days. It's worth noting that, strictly speaking, an Nvidia GPU is not required for GPU programming like this; AMD has a GPU computing platform called ROCm, but despite being open source it still lags Nvidia in adoption and arguably in performance as well. Some other learning tools for GPU programming we've seen in the past include this puzzle-based tool which illustrates some of the specific problems GPUs excel at.

Cyanotype Prints On A Resin 3D Printer

Not that it’s the kind of thing that pops into your head often, but if you ever do think of a cyanotype print, it probably doesn’t conjure up thoughts of modern technology. For good reason — the monochromatic technique was introduced in the 1840s, and was always something of a niche technology compared to more traditional photographic methods.

The original method is simple enough: put an object or negative between the sun and a UV-sensitive medium, and the exposed areas will turn blue and produce a print. This modernized concept created by [Gabe] works the same way, except both the sun and the negative have been replaced by a lightly modified resin 3D printer.

A good chunk of the effort here is in the software, as [Gabe] had to write some code that would take an image and turn it into something the printer would understand. His proof of concept was a clever bit of Python code that produced an OpenSCAD script, ultimately converting each pixel of a grayscale image into a rectangular column of variable height. The resulting STL files could be run through the slicer to produce the necessary files to load into the printer. This was eventually replaced with a new Python script capable of converting images to native printer files through UVtools.
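
As a rough illustration of that first proof-of-concept approach (not [Gabe]'s actual code; the file names, sizes, and tone-to-height mapping are assumptions), a grayscale image can be turned into an OpenSCAD heightmap along these lines:

```python
# Illustrative sketch: darker image pixels become taller columns, so they stay
# in the sliced volume for more layers and receive more UV exposure.
from PIL import Image

MAX_HEIGHT = 5.0   # mm; taller columns mean longer exposure (assumed mapping)
PIXEL_SIZE = 0.5   # mm per image pixel (assumed)

img = Image.open("photo.png").convert("L").resize((128, 128))
with open("heightmap.scad", "w") as scad:
    for y in range(img.height):
        for x in range(img.width):
            darkness = 1.0 - img.getpixel((x, y)) / 255.0
            h = max(darkness * MAX_HEIGHT, 0.01)   # avoid zero-height cubes
            scad.write(
                f"translate([{x * PIXEL_SIZE}, {y * PIXEL_SIZE}, 0]) "
                f"cube([{PIXEL_SIZE}, {PIXEL_SIZE}, {h:.3f}]);\n"
            )
```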

On the hardware side, all [Gabe] had to do was remove the vat that would usually hold the resin, and replace that with a wooden lid to both hold the UV-sensitized paper in place and protect the user’s eyes. [Gabe] says there’s still some room for improvement, but you wouldn’t know it by looking at some of the gorgeous prints he’s produced already.

No word yet on whether or not future versions of the project will support direct-to-potato imaging.