I don't know if this is relevant, but in tidalcycles (tidalcycles.org) I dealt with the problem of how to sample a function of (rational) time by instead working with functions of timespans: basically (Time, Time) -> [(Time, Time, a)], i.e. a function of a start and end time, which returns a list of values together with the timespans within which they are active (which will of course intersect with the input timespan). This can represent functions of either continuous or discrete time. In the continuous case, you can just take the midpoint of the input timespan to compute the output from, so the duration of the input timespan acts a bit like the resolution of a sampling rate. With both continuous and discrete functions of time represented by the same type, it becomes very easy to compose them together.
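A stripped-down sketch of that type, to make the shape concrete — this is not Tidal's actual API, just an illustration of the idea, and all names here are my own:

```haskell
-- A stripped-down sketch of the timespan idea; not Tidal's actual
-- API, just the shape of it.
type Time = Rational
type Span = (Time, Time)

-- A pattern answers a query timespan with a list of values, each
-- tagged with the span it is active within.
type Pattern a = Span -> [(Span, a)]

-- Continuous time: sample at the midpoint of the query span, so the
-- width of the span acts like the sampling resolution.
continuous :: (Time -> a) -> Pattern a
continuous f (s, e) = [((s, e), f ((s + e) / 2))]

-- Discrete time: here, one value per unit cycle, clipped to the
-- query span.
fromList :: [a] -> Pattern a
fromList xs (s, e) =
  [ ((max s t0, min e t1), xs !! (fromIntegral i `mod` length xs))
  | i <- [floor s .. ceiling e - 1] :: [Integer]
  , let t0 = fromIntegral i
        t1 = t0 + 1
  , t0 < e, t1 > s
  ]
```

Because both kinds of signal live in the same Pattern type, composing them is ordinary function plumbing: query each with the same span and combine the resulting events.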

On 12 November 2017 at 05:04, Evan Laforge <email obscured> wrote:
> Has anyone done work with, or have recommendations for how to represent a
> possibly discontinuous function, specifically a time-to-float signal?
>
> This isn't specifically related to Haskell or to art, but I'm thinking of
> Haskell implementations, and anyone dealing with music or animation surely
> has to deal with values that change in time.
>
> The context is that I construct various signals in ad-hoc ways, but
> usually via concatenating segments (of various curves, though flat and
> linear are common), which then turn into instructions for some backend.
> In the past, the main backend was MIDI, so I represented the signals as
> Vector (Time, Y), where both Time and Y are Double. The interpretation
> was that each sample sets a constant value, so to convert to MIDI I just
> emit the samples directly.
>
> However, this only works because MIDI is low bandwidth and we're forced
> to accept that the receiving synthesizer is going to be getting these
> rough signals and smoothing them out internally. Once I start working
> with my own synthesizers I need audio-rate controls, and this becomes
> really wasteful, especially since I don't know up front what the eventual
> backend will be. I'd be forced to use an audio-level sample rate globally
> and then thin it out for MIDI. Since I always wind up serializing the
> signal one way or another at the end, having an efficient representation
> is important. This is also why the traditional fixed sampling rate is
> out, even though the sparse approach adds plenty of complexity (for
> instance, resampling both inputs to add them together).
>
> The next thought is to retain the sparse [(Time, Y)] representation but
> interpret it as linear segments. This means a discontinuous segment
> actually requires two samples, e.g. [(0, 0), (1, 0), (1, 1), (2, 1)].
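The coincident-sample interpretation can be sketched directly; the types, names, and the choice of taking the right-hand value at a jump are mine, not Evan's code:

```haskell
-- Sketch of the piecewise-linear interpretation: two samples at the
-- same time encode a discontinuity.  Lookup interpolates within a
-- segment and takes the right-hand value at a jump.
type Time = Double
type Y = Double

newtype Signal = Signal [(Time, Y)]

at :: Signal -> Time -> Y
at (Signal samples) t = go samples
  where
    go ((t0, y0) : rest@((t1, y1) : _))
      | t < t0    = y0        -- constant before the first sample
      | t >= t1   = go rest   -- >= prefers the later of coincident samples
      | otherwise = y0 + (y1 - y0) * (t - t0) / (t1 - t0)
    go [(_, y)] = y           -- constant after the last sample
    go []       = 0
```

The `t >= t1` guard is what hides the coincident-sample bookkeeping: at a jump time, lookup advances past the left-hand sample and lands on the right-hand one, so callers never see the doubled point.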
> Leaving that with a sample-oriented API becomes seriously error-prone,
> because you have to remember to handle before-and-after coincident
> samples, split segments when merging or slicing signals, etc. But perhaps
> with an explicitly segment-oriented API I could hide all of that. Perhaps
> have a special encoding for flat segments if they're common enough...
> though the obvious encodings don't actually save any space and add
> complexity, so maybe don't bother with that part. I've never heard of
> anything like that though; are there any examples out there?
>
> Of course, the most idiomatic representation is surely a function
> Time -> Y. Not only can I concatenate curves with perfect accuracy and
> arbitrary resolution and leave the sampling to the backend, it also
> elegantly allows efficient transformations. For instance, shifting the
> Time is just composing addition on the front, while in a sample-oriented
> representation you have to either transform all the samples, or add a
> field for an offset and remember to have every access function take it
> into account. That in turn adds plenty of complexity and only works for
> the specific transformations hardcoded in. In practice, f(x+k) and
> k * f(x) serve most purposes.
>
> I haven't tried this yet, but some issues make me hesitate. One is that I
> lose structure. To find the inflection points I'd have to sample and see
> how the values change. For instance, I'll surely find myself trying to
> infer linear segments back out again, because various backends (including
> the GUI) do well with linear segments. And then I worry about memory
> leaks. For a data structure, I can flatten the whole thing and be sure no
> thunks are inside, but for a function built from composing other
> functions, I have to make sure every single component function isn't
> holding on to anything it doesn't need. It seems very dangerous. So maybe
> in the pure form the function is out.
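One way the segment-oriented API could look — every name here is invented for illustration, it just has to reproduce the doubled-sample encoding so callers never construct it by hand:

```haskell
-- A sketch of a segment-oriented API over the doubled-sample
-- representation.  Callers build whole segments; the renderer emits
-- coincident samples wherever adjacent segments disagree.
type Time = Double
type Y = Double

data Segment = Flat Time Y      -- duration, constant value
             | Linear Time Y Y  -- duration, start value, end value

render :: [Segment] -> [(Time, Y)]
render = dedupe . go 0
  where
    go _ [] = []
    go t (Flat d y : rest)       = (t, y) : (t + d, y) : go (t + d) rest
    go t (Linear d y0 y1 : rest) = (t, y0) : (t + d, y1) : go (t + d) rest
    -- Drop the duplicate breakpoint where one segment ends exactly
    -- where the next begins, keeping doubled samples only at jumps.
    dedupe (p : q : rest) | p == q    = dedupe (q : rest)
                          | otherwise = p : dedupe (q : rest)
    dedupe ps = ps
```

For example, render [Flat 1 0, Flat 1 1] reproduces the [(0, 0), (1, 0), (1, 1), (2, 1)] encoding from above, while two segments that meet continuously collapse to a single shared breakpoint.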
> Maybe there's some kind of hybrid approach, with a pair of a function and
> a vector of annotations of where the break points are, with say
> Annotation = Flat | Linear | Other. I'd have to transform them together,
> so I still wind up with a Vector (Time, Annotation) with some of the same
> problems as the (Time, Y) samples, but maybe it's doable. But even if it
> is, I might not need the additional accuracy over approximation with
> linear segments, and I don't see any way around the memory-leak problem.
> Still, it would be interesting to hear if there are implementations of
> this kind of approach.
>
> I think in the end I've more or less convinced myself to continue with
> the linear-segments approach, but put on an explicitly segment-oriented
> API that makes it hard to mess them up, whatever that winds up looking
> like. But has anyone else faced this kind of problem, or seen elegant
> solutions to it?
>
> Thanks!
>
> --
>
> Read the whole topic here: Haskell Art:
> http://lurk.org/r/topic/2wxYdLvac7CZRnjVRka8SJ
>
> To leave Haskell Art, email <email obscured> with the following email
> subject: unsubscribe

--
blog: http://slab.org/