Fading is done through exponential decay of each color in each pixel.
Technically this function never reaches zero, but the strip has its own cutoff (values < 0.5 become 0); once a pixel is completely off, the fade function simply skips it and moves on to the next one.
We can gauge how long it will take to snuff out a light completely by solving for x when the function equals 0.49, and, knowing that each pass of `loop()` takes at least 30 ms (because of the delay call), we can get a lower-end estimate for how long it will take to completely fade out a pixel. Here's this written out for `fade()` called with `0.95` and `0.99` for an R, G, or B value of 255:
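Filling in the arithmetic, we're solving $255 \cdot f^{x} = 0.49$ for the number of passes $x$:

$$
x = \frac{\ln(0.49/255)}{\ln f} \approx \begin{cases} 122 \text{ passes,} & f = 0.95 \\ 623 \text{ passes,} & f = 0.99 \end{cases}
$$

At the 30 ms-per-pass floor, that's a little over 3.5 seconds for `0.95` and nearly 19 seconds for `0.99`.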
So you can see that a fade coefficient of 0.95 would take about 3.5 seconds to completely fade out a full light, assuming the other operations done during the loop take negligible time.
Even a difference of a few _hundredths_ exaggerates the magnitude of the effect, more than squaring it! A linear approach can also be taken by simply subtracting the same number from each light on each pass (i.e. 255 − Y·x, where Y is the number subtracted each pass).
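For the two approaches to fade out at the same time, the linear version must hit zero in the same number of passes: $255 - Yx = 0$ at $x = 255/Y$, so matching the roughly 122 passes of the `0.95` exponential means subtracting about $Y = 255/122 \approx 2.1$ per pass.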
A graphical comparison of the two, set to fade out at the same time:
Currently `fade()` uses the exponential function since it more accurately models how fire burns out, and I think that caters to a more natural-looking aesthetic, but simply changing `split(col, j) * damper;` to `split(col, j) - damper;` would make it linear if this is something you want to try.
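To make that concrete, here's a minimal sketch of what a full fade pass could look like. Only the `split(col, j) * damper` expression comes from the project; the rest (an Adafruit_NeoPixel-style `strip` object and the loop scaffolding) is assumed for illustration:

```cpp
// Sketch only: one exponential fade pass over the strip.
// Assumes an Adafruit_NeoPixel-style `strip`; split() is the project's
// helper for pulling channel j (0 = R, 1 = G, 2 = B) out of a packed color.
void fade(float damper) {
  for (uint16_t i = 0; i < strip.numPixels(); i++) {
    uint32_t col = strip.getPixelColor(i);
    if (col == 0) continue;  // pixel is completely off; skip to the next one

    uint8_t rgb[3];
    for (uint8_t j = 0; j < 3; j++) {
      // Exponential decay; change `* damper` to `- damper` for a linear fade.
      rgb[j] = (uint8_t)(split(col, j) * damper);
    }
    strip.setPixelColor(i, rgb[0], rgb[1], rgb[2]);
  }
}
```

Note that a real linear version would also want to clamp at zero, since `split(col, j) - damper` can go negative and wrap around in an unsigned type.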
Of all the sections, this one probably required the most refinement and testing.
The basic idea of "bumps" is that they are a _relatively_ large, _positive_ change in volume (i.e. current volume − last volume > 0, where at least the current volume is not noise). The easiest way to think about this is a bass drum beat, where you have quick, hard changes in volume very regularly. While bumps were initially intended to follow the beat, by the nature of the data they tend to follow other patterns of a song as well, which isn't necessarily a bad thing. Because of this drift toward other patterns in the sound, it becomes a little harder to talk about bumps, but their namesake is decent enough: just "bumps" in volume, regardless of what the actual sound is.
As mentioned, bumps were intended to model the bass drum or the "beat" of a given song, so that intent is still the focus; any deviating responsiveness that occurs is just an added bonus. As such, the current implementation uses a sequenced average similar to the one discussed in the **"Averaging"** section, but of the average positive change in volume rather than the volume itself. Consider the following table:
A positive change in volume was read out every time it occurred (so this does not show decreases in volume). So, how do we decide which of these bumps are worthy enough to represent the beat? Well, the answer is convoluted.
There are two main ways I've found to approach this problem, depending on whether the threshold for a "bump" is constant or dynamic. Both methods have their pros and cons, but I've opted for the dynamic method, which utilizes a sequenced average of bumps. The average of bumps is coded as: `if (volume - last > 0) avgBump = (avgBump + (volume - last)) / 2.0;`
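In context, that line would sit in the main `loop()` something like this; apart from the `avgBump` line itself, everything here (the `AUDIO_PIN` name, the `analogRead()` source of `volume`) is an illustrative assumption:

```cpp
// Sketch: where the sequenced bump average might live in the main loop.
// Only the avgBump line is taken from the project; the rest is scaffolding.
#define AUDIO_PIN A0  // hypothetical analog pin for the sound sensor

float volume = 0, last = 0, avgBump = 0;

void setup() {}  // setup omitted for brevity

void loop() {
  last = volume;                   // remember the previous reading
  volume = analogRead(AUDIO_PIN);  // take the current volume reading

  // Every positive change in volume gets folded into the running average.
  if (volume - last > 0) avgBump = (avgBump + (volume - last)) / 2.0;

  delay(30);  // the >= 30 ms pass mentioned in the fading section
}
```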
Essentially, every positive change in volume is averaged in. Here is the value of the average bump level (in red) of the bumps above:
The difference may be hard to see at first glance, but there is a much less regular…
The threshold-based method eliminates the quieter "noise" bumps, but it creates louder noise bumps. The average-based method lets through some quiet noise bumps, but only proportionally: in louder environments it eliminates those quieter bumps while also preventing the formation of louder noise bumps.
The average-based method is useful when it comes to long tones, like a key being held for a while. To our ear there are no "bumps" of any sort in this kind of sound, but the volume fluctuations still trigger bumps in the program. These are very obvious with the threshold-based method, but the average-based method tends to mitigate excessive responsiveness during these kinds of sounds, so it was ultimately the method implemented in the code.
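As a final sketch, here's one way the average-based gate could be wired up. The `isBump()` helper and the `0.9` multiplier are illustrative assumptions, not the project's actual code:

```cpp
// Sketch: dynamic (average-based) bump detection.
// The 0.9 multiplier is illustrative; the project's real gate may differ.
float avgBump = 0;

bool isBump(float volume, float last) {
  float change = volume - last;
  if (change <= 0) return false;       // only positive changes can be bumps

  avgBump = (avgBump + change) / 2.0;  // fold the change into the average

  // Because the gate tracks avgBump, it scales itself: louder environments
  // raise the average, which filters out the quieter noise bumps without
  // letting louder ones form. A constant-threshold version would instead
  // be `return change > FIXED_THRESHOLD;` for some hand-picked constant.
  return change > avgBump * 0.9;
}
```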