On Sun, 12 Nov 2017, Evan Laforge wrote:

> I see, I guess you mean NodeList.lookup is looking in a binary tree.

Right.

> Right, I'm not worried about approximation, as long as it's not too
> approximate. It's just that I'm not sure about the complexity of the
> implementation for those operations.

An approach could be: implement (zipWith f xs ys) in some way and then use curve fitting to construct approximate cubic functions. An exact implementation would have to be written separately for every operation, and might be feasible only for basic operations like multiplication. A general solution might sample the curves. The Fit example shows how curve fitting works.

Alternatively, you may guarantee correct function application only at the nodes and apply differentiation rules there. E.g. if you have node (ya,dya) from the first function and (yb,dyb) from the second function, you may set the node of the multiplied curve using the product rule to:

    y  = ya*yb
    dy = ya*dyb + dya*yb

> Also I see I had to do some awkward binary search guessing to find the
> value at a specific 'x'. I forget the details now but I guess since
> the bezier's output is (x, y) pairs, I can't just directly find the y
> for a given 'x', hence the searching. That's probably pretty
> inefficient and a sign I should have used a better technique. It
> looks like Piece.hermite1 takes an arbitrary x and directly computes
> the result, so it doesn't have that problem.

Searching should only be necessary when you compute values of a function that is a concatenation of basic pieces. Piece.hermite1 interpolates within one piece, thus no search. The same would apply to Bezier curves.

> What's the difference with the other Types, like cubicLinear and
> cubicParabola, given that interpolatePiece functions are all the same?
> I gather the Basis.coefficientsToCubicLinear and CubicParabola are just
> using different techniques and inputs to compute hermite1 values, which
> ultimately goes into the same kind of interpolation.

Right, they are all different ways to determine the cubic polynomials. Once you have the cubic polynomials, their interpolation is always the same.

> I guess the idea with the Hermite spline is that since in and out
> slope are the same, you automatically get a smooth function.

Right.

> But on the other hand, how do you influence how sharp the curve is?
> With the bezier implementation I can get back to linear interpolation by
> setting both weights to 0, or get a sigmoid kind of shape by setting
> them both to 0.5.

The sigmoid type would be slopes = 0 for Hermite interpolation; linear would be slopes = (y1-y0)/(x1-x0).

> However, the Wikipedia page claims that Hermite and Bezier are simply
> different representations for the same thing, and can be converted
> mechanically, so surely they should have the same capabilities?

Right, if of the same order they span the same set of functions, here: cubic polynomials. More generally: n degrees of freedom require polynomials of order (n-1).
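The node-wise product rule above can be sketched directly. `Node` here is a hypothetical record holding a breakpoint's value and slope; the actual types in the 'interpolation' package differ:

```haskell
-- Hypothetical breakpoint record: value and slope at a node.
data Node = Node { y, dy :: Double } deriving (Eq, Show)

-- Product rule at a shared node: (f*g)' = f*g' + f'*g.
-- Away from the nodes the multiplied cubic is only approximated.
multNode :: Node -> Node -> Node
multNode (Node ya dya) (Node yb dyb) =
  Node (ya * yb) (ya * dyb + dya * yb)
```

Between nodes this only matches the true product approximately, which is the trade-off the post describes.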

# Posts in Haskell Art

On Sun, Nov 12, 2017 at 3:39 PM, Henning Thielemann <email obscured>> wrote:

> On Sun, 12 Nov 2017, Evan Laforge wrote:
>
>> Oh ok, I'll take a closer look at the source. I thought the
>> underlying representation was a list, but it's hard to tell how
>> anything works with no documentation.
>
> Please look at the two examples.

I see, I guess you mean NodeList.lookup is looking in a binary tree.

> Multiplication cannot be done exactly. You could try an approximation.
>
> For max(f,g) you need to solve the cubic equation f(x)-g(x)=0 and split at
> the zeros of f-g.
>
> For ad-hoc functions you need an approximation, again. But ... audio rate
> sampling is approximation, too, isn't it?

Right, I'm not worried about approximation, as long as it's not too approximate. It's just that I'm not sure about the complexity of the implementation for those operations. If it's too hard for me to figure out, then it's as good as impossible! If it's possible, but makes me think really hard for a long time every time I want to add a new operation, or worse winds up with something that is not obviously correct and hence buggy, then that's not too great either.

I tend to experiment with possibly "unprincipled" operations which are nevertheless useful in practice. For instance, I quite frequently use a kind of scaling where (-1, 0) scales from 0 to x, and (0, 1) scales from x to the max, which is usually 1. A nice thing about the constant samples is that you just give the function to a 'zipWithY :: (Y -> Y -> Y) -> Signal -> Signal -> Signal' and you're done. Anyway, there's nothing for it but to try and see how it turns out.

Thanks for the reference to cubic Hermite splines. I previously used cubic Bezier curves, and muddled through the implementation from various Wikipedia pages. It takes two weights, which influence flatness at the beginning and end, which is OK to use, but less general than a slope.

Also I see I had to do some awkward binary search guessing to find the value at a specific 'x'. I forget the details now, but I guess since the Bezier's output is (x, y) pairs, I can't just directly find the y for a given 'x', hence the searching. That's probably pretty inefficient and a sign I should have used a better technique. It looks like Piece.hermite1 takes an arbitrary x and directly computes the result, so it doesn't have that problem.

What's the difference with the other Types, like cubicLinear and cubicParabola, given that the interpolatePiece functions are all the same? I gather Basis.coefficientsToCubicLinear and CubicParabola are just using different techniques and inputs to compute hermite1 values, which ultimately go into the same kind of interpolation.

I guess the idea with the Hermite spline is that since the in and out slopes are the same, you automatically get a smooth function. But on the other hand, how do you influence how sharp the curve is? With the Bezier implementation I can get back to linear interpolation by setting both weights to 0, or get a sigmoid kind of shape by setting them both to 0.5. However, the Wikipedia page claims that Hermite and Bezier are simply different representations for the same thing, and can be converted mechanically, so surely they should have the same capabilities?
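The "unprincipled" bipolar scaling mentioned above could look something like the sketch below. `scaleAround` and its exact endpoint conventions are my guess at the semantics, not code from the actual system:

```haskell
-- Bipolar scaling around a center value x: a control value in [-1, 0]
-- maps to [0, x], and a control value in [0, 1] maps to [x, top].
-- Sketch only; the name and conventions are assumptions.
scaleAround :: Double -> Double -> Double -> Double
scaleAround top x control
  | control < 0 = x * (control + 1)        -- -1..0 maps to 0..x
  | otherwise   = x + (top - x) * control  --  0..1 maps to x..top
```

In the constant-samples world this would then be handed to the 'zipWithY' mentioned in the post, e.g. something like `zipWithY (scaleAround 1) centers controls`.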

On Sun, 12 Nov 2017, Evan Laforge wrote:

> Oh ok, I'll take a closer look at the source. I thought the
> underlying representation was a list, but it's hard to tell how
> anything works with no documentation.

Please look at the two examples.

> It seems like arbitrary math operations might be tricky though. All
> you need to resample the signals to have the same breakpoints is a
> split for the curve, but what about multiplication, or max, or some
> more ad-hoc function?

Multiplication cannot be done exactly. You could try an approximation.

For max(f,g) you need to solve the cubic equation f(x)-g(x)=0 and split at the zeros of f-g.

For ad-hoc functions you need an approximation, again. But ... audio rate sampling is approximation, too, isn't it?

On Sun, Nov 12, 2017 at 9:44 AM, Henning Thielemann <email obscured>> wrote:

> It does the binary search for you. Summing two curves and doing the merge
> would be a nice addition, though.

Oh ok, I'll take a closer look at the source. I thought the underlying representation was a list, but it's hard to tell how anything works with no documentation.

It seems like arbitrary math operations might be tricky though. All you need to resample the signals to have the same breakpoints is a split for the curve, but what about multiplication, or max, or some more ad-hoc function? I guess constant samples work with anything, linear ones only work for linear functions (so multiplication yes, but something like max or an ad-hoc function requires an ad-hoc solution), and curves... maybe you wind up having to just sample them. But that means that a fancy curve representation could degenerate to flat samples as soon as the "wrong kind" of transformation is applied, which in practice might mean just about always.

Of course linear has the same kind of problem, but maybe it's easier to come up with the ad-hoc solution (e.g. for max, find where they cross and splice there). For instance, long ago I originally used the linear segments representation, but even that gets hard for integration, while for flat samples it's trivial. Now that I think of it, I think that's one of the reasons I gave up on linear segments way back when. I guess you can find where curves cross in exactly the same way as lines, but somehow it seems like it might be more complicated. I'll try sketching out all of the operations I'll need.
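For the "find where they cross and splice there" idea, the crossing point of two linear segments over a shared interval works out to a one-liner. This is a sketch; the endpoint-pair representation is an assumption, not the actual signal type:

```haskell
-- Two linear segments over the same interval [x0, x1], each given by
-- its endpoint values. Returns the x where they cross, if the crossing
-- is strictly inside the interval; max would splice the lists there.
crossAt :: Double -> Double -> (Double, Double) -> (Double, Double)
        -> Maybe Double
crossAt x0 x1 (ya0, ya1) (yb0, yb1)
  | d0 * d1 < 0 = Just (x0 + (x1 - x0) * d0 / (d0 - d1))
  | otherwise   = Nothing
  where
    d0 = ya0 - yb0  -- difference at the left endpoint
    d1 = ya1 - yb1  -- difference at the right endpoint
```

The difference of two linear functions is linear, so its zero (if the endpoint differences change sign) is found without any search, unlike the cubic case Henning describes.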

On Sun, 12 Nov 2017, Evan Laforge wrote:

> What I was thinking of in terms of API is operations like merge A with
> B, or slice some time out of A, or shift A by t. It looks like maybe
> the 'interpolation' package is not quite that high level?

It does the binary search for you. Summing two curves and doing the merge would be a nice addition, though.

On Sun, Nov 12, 2017 at 7:52 AM, Henning Thielemann <email obscured>> wrote:

> I had good experiences with piecewise polynomial functions, e.g. cubic
> polynomials. The representation would be [Piece], with
>
> data Piece = Piece {duration,y0,y1,dy0,dy1 :: Double}

This seems basically like the linear version, only with a fancier kind of interpolation. I guess I can recover whether it's linear by looking at the values in there. With splines of course I could remain accurate and do away with srate for a wider variety of curves... maybe even all of them. I think all the shapes I use could either be represented exactly, or would fit well enough.

Using duration I guess means you can sample from the start efficiently, but to find the value at a certain point you have to scan and sum durations. I guess I could change that to binary search by using start time instead of duration, at the cost of a "must be ordered" invariant.

What I was thinking of in terms of API is operations like merge A with B, or slice some time out of A, or shift A by t. It looks like maybe the 'interpolation' package is not quite that high level?
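The duration-to-start-time conversion mentioned above is just a prefix sum. A sketch, with a linear-scan lookup standing in for the binary search a vector representation would allow ('withStarts' and 'pieceAt' are illustrative names, not package functions):

```haskell
data Piece = Piece { duration, y0, y1, dy0, dy1 :: Double }

-- Pair each piece with its absolute start time via a prefix sum over
-- the durations. The result is ordered by start time, which is the
-- "must be ordered" invariant that binary search would rely on.
withStarts :: [Piece] -> [(Double, Piece)]
withStarts ps = zip (scanl (+) 0 (map duration ps)) ps

-- Find the piece whose interval covers x (the last start <= x).
-- A linear scan here; binary search over a vector would replace it.
pieceAt :: [(Double, Piece)] -> Double -> Maybe (Double, Piece)
pieceAt tps x =
  case takeWhile ((<= x) . fst) tps of
    [] -> Nothing            -- x is before the first piece
    ys -> Just (last ys)
```

Times past the end fall into the last piece here; a real API would decide whether to clamp or reject them.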

On Sun, 12 Nov 2017, Evan Laforge wrote:

> For example, say I want a signal that goes 0 to 1 like f(x) = x^2, and goes
> back to 0 like f(x) = -x. In the function-oriented representation, I could
> write:
>
> expon t = t^2
> linear t = -t + 1

Btw. "expon" suggests it would be something like 2**t. If you want the square function, i.e. t -> t^2, then the cubic interpolation approach would work exactly. E.g. t -> t^2 has the start and end values 0 and 1 and the slopes 0 and 2 in the interval (0,1). That is, you have:

    f = [Piece 1 0 1 0 2, Piece 1 1 0 (-1) (-1)]

You will also see that your curve is not smooth at point 1. I think that the 'interpolation' package could help you.
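For reference, evaluating such a piece uses the standard cubic Hermite basis. This is a sketch of what Piece.hermite1 computes, written out from the Wikipedia formulation, not the package's actual code:

```haskell
data Piece = Piece { duration, y0, y1, dy0, dy1 :: Double }

-- Evaluate one piece at x in [0, duration] with the standard cubic
-- Hermite basis functions; slopes are scaled by the duration because
-- the basis is defined on the unit interval.
evalPiece :: Piece -> Double -> Double
evalPiece (Piece d a b da db) x =
  let t   = x / d
      h00 = 2*t^3 - 3*t^2 + 1   -- weight of the start value
      h10 = t^3 - 2*t^2 + t     -- weight of the start slope
      h01 = -2*t^3 + 3*t^2      -- weight of the end value
      h11 = t^3 - t^2           -- weight of the end slope
  in h00*a + h10*d*da + h01*b + h11*d*db
```

With Piece 1 0 1 0 2 this reproduces t^2 exactly on the interval, confirming that the square function fits in one cubic piece.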

I think Henning is right, I've been misunderstood. My understanding is that FRP is a way to structure programs that take time-varying signals as input. What I'm interested in is how to represent the signal itself.

For example, say I want a signal that goes 0 to 1 like f(x) = x^2, and goes back to 0 like f(x) = -x. In the function-oriented representation, I could write:

    expon t = t^2
    linear t = -t + 1
    f t | t < 1 = expon t
        | t >= 1 = linear (t - 1)
    sampled = map f [0, 0.25 .. 2]

I have to shift 'linear' in time by t+1; presumably I'd be annotating these functions with start and end times, and write a (<>) that merges them. Meanwhile, here's the same thing, but with a piecewise-linear representation:

    srate = 4
    expon start end = [(start + t, t^2) | t <- [0, 1/srate .. end-start]]
    linear start end = [(start, 1), (end, 0)]
    f = expon 0 1 <> linear 1 2
    at f t = case dropWhile ((<t) . fst) f of
        ((t1, y1) : (t2, y2) : _) -> (y2 - y1) / (t2 - t1) * (t - t1) + y1
    sampled = map (at f) [0, 0.25 .. 2]

Notice that 'expon' can only be approximated with a sampling rate, but linear works out exactly, because it corresponds to the interpolation I'm doing at the end to create 'sampled'. In the example, I think 'at' is broken because it doesn't understand coincident samples, but you get the idea. A piecewise-flat (stepwise?) version would look the same, except 'linear' would need to use 'srate' and 'at' would just emit 'y1', without linear interpolation.

With the toy example, #1 is obviously simpler and more accurate, but as I said it might have memory leaks. Also, for #1 I *have* to sample the result even when the value is constant, and I don't know where the breakpoints are. That won't do, because imagine the signal represents pitch: if it gets misaligned with the note attack time, even by a single audio-rate sample, the result is very different. #2 doesn't have this problem.

Or, imagine the consumer of #2 is a GUI: it can draw a line with a single call, and it comes out high-DPI and antialiased, so it likes #2. If the GUI or whatever is in another process, I have to sample #1 anyway to serialize it, and all I can do is end up with a worse #2.
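A version of 'at' that handles the coincident samples the post mentions could look like the sketch below. It assumes the sample list is non-empty and sorted by time, and takes the right-hand value at a jump (so a discontinuity at t yields the post-jump value):

```haskell
-- Piecewise-linear lookup over [(time, value)] breakpoints.
-- 'span' splits the list into samples at-or-before t and samples
-- after t, so a coincident pair like (1,0),(1,1) resolves to the
-- later value once t reaches 1.
at :: [(Double, Double)] -> Double -> Double
at f t = case span ((<= t) . fst) f of
  ([], (_, v0) : _) -> v0                          -- before the first sample
  (before, after)   ->
    let (t1, v1) = last before
    in case after of
         (t2, v2) : _ | t2 > t1 ->
           v1 + (v2 - v1) * (t - t1) / (t2 - t1)   -- inside a segment
         _ -> v1                                   -- at or past the last sample
```

With the discontinuous example from the original post, f = [(0,0),(1,0),(1,1),(2,1)], this gives 0 just before t=1 and 1 from t=1 on.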

On Sun, 12 Nov 2017, Francesco Ariis wrote:

> On Sat, Nov 11, 2017 at 09:04:27PM -0800, Evan Laforge wrote:
>> Has anyone done work with, or have recommendations for how to represent a
>> possibly discontinuous function, specifically a time to float signal?
>
> I am pretty sure I am not saying anything revolutionary, but have
> you tried to look for a FRP solution? It seems (judging by the
> signatures in your message: `[(Time, Y)]`, `Time -> Y`) it could
> be the case.

He wants linear segments; that's not as easy with FRP, since FRP is usually tailored to piecewise constant functions. In FRP you can hardly specify the slope, since you do not know when the next event/node arrives, right? So FRP solves the problems that appear in live processing of data, but was that what Evan wanted?

On Sat, 11 Nov 2017, Evan Laforge wrote:

> I think in the end I've more or less convinced myself to continue with
> the linear segments approach, but put on an explicitly segment-oriented
> API that makes it hard to mess them up, whatever that winds up looking
> like. But has anyone else faced this kind of problem, or seen elegant
> solutions to it?

I had good experiences with piecewise polynomial functions, e.g. cubic polynomials. The representation would be [Piece], with

    data Piece = Piece {duration, y0, y1, dy0, dy1 :: Double}

where y0 and y1 are the values at the interval boundaries, and dy0 and dy1 are the slopes at the interval boundaries. You can easily model low-frequency control signals with discontinuities or non-smooth points.

https://en.wikipedia.org/wiki/Cubic_Hermite_spline
https://hackage.haskell.org/package/interpolation
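In this representation a discontinuity needs no special case at all: it is just a mismatch between the end value of one piece and the start value of the next. A minimal sketch of a unit step, using the Piece record from the post:

```haskell
data Piece = Piece { duration, y0, y1, dy0, dy1 :: Double }

-- A control signal flat at 0 for one time unit, then flat at 1:
-- the jump lives entirely in the y1/y0 mismatch at the boundary.
step01 :: [Piece]
step01 =
  [ Piece 1 0 0 0 0   -- flat at 0
  , Piece 1 1 1 0 0   -- flat at 1; y0 /= previous y1 is the jump
  ]
```

A non-smooth (but continuous) point would instead match the values and mismatch the slopes dy1/dy0.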

Yep, FRP is in essence what you are describing. There are a lot of details to take into account, some of which you already mention (merging conflicting signals, time transformations, etc.).

FRP is about time and, in particular, about values that change over time. In FRP, type Signal a = Time -> a (conceptually). What you have are step signals, that is, signals that are flat between points and change sparsely. A related concept is events, that is, values that only _exist_ at specific points. Maybe not conceptually, but the type [(Time, val)] is how event (streams) are defined. Events occur only at specific times, and some FRP implementations give ways of turning event streams into step signals.

I really recommend that you look at the work of Henrik Nilsson; he's a lot into music and FRP. Also take a look at the proceedings of FARM (the workshop on Functional Art, Music, Modelling and Design); this comes up all the time there. See also Euterpea.

I just defended my PhD, on UIs and games in FRP. Feel free to ping me as well, or ask more detailed questions here and I'll try to answer :)
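The conceptual types above, and the event-stream-to-step-signal conversion, can be written down directly. This is a sketch of the denotational semantics only; real FRP libraries implement the equivalent of 'stepper'/'hold' incrementally rather than by scanning a list:

```haskell
type Time = Double

-- Conceptual FRP types from the post: a signal is a function of time,
-- an event stream is a (time-ordered) list of time-stamped values.
type Signal a = Time -> a
type Events a = [(Time, a)]

-- Turn an event stream into a step signal: the value is the initial
-- value until the first occurrence, then the most recent occurrence.
stepper :: a -> Events a -> Signal a
stepper x0 evs t = last (x0 : [x | (te, x) <- evs, te <= t])
```

For example, stepper 0 [(1, 10), (2, 20)] is 0 before time 1, 10 on [1, 2), and 20 from time 2 on.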

Ivan

On 12/11/17 09:17, Francesco Ariis wrote:

> On Sat, Nov 11, 2017 at 09:04:27PM -0800, Evan Laforge wrote:
>> Has anyone done work with, or have recommendations for how to represent a
>> possibly discontinuous function, specifically a time to float signal?
>
> I am pretty sure I am not saying anything revolutionary, but have
> you tried to look for a FRP solution? It seems (judging by the
> signatures in your message: `[(Time, Y)]`, `Time -> Y`) it could
> be the case.
>
> There are *a lot* of FRP frameworks and I am far from sure which
> suits your use-case. Haskell-cafe might be of help, if you briefly
> describe your requirements!
>
> Good luck in your search
> -F

On Sat, Nov 11, 2017 at 09:04:27PM -0800, Evan Laforge wrote:

> Has anyone done work with, or have recommendations for how to represent a
> possibly discontinuous function, specifically a time to float signal?

I am pretty sure I am not saying anything revolutionary, but have you tried to look for a FRP solution? It seems (judging by the signatures in your message: `[(Time, Y)]`, `Time -> Y`) it could be the case.

There are *a lot* of FRP frameworks and I am far from sure which suits your use-case. Haskell-cafe might be of help, if you briefly describe your requirements!

Good luck in your search
-F

Has anyone done work with, or have recommendations for how to represent a possibly discontinuous function, specifically a time to float signal? This isn't specifically related to Haskell or to art, but I'm thinking of Haskell implementations, and anyone dealing with music or animation surely has to deal with values that change in time.

The context is that I construct various signals in ad-hoc ways, but usually via concatenating segments (of various curves, but flat and linear are common), and then they turn into instructions for some backend. In the past, the main backend was MIDI, so I represented the signals as Vector (Time, Y), where both Time and Y are Double. The interpretation was that each sample sets a constant value, so to convert to MIDI I just emit the samples directly. However, this only works because MIDI is low bandwidth and we're forced to accept that the receiving synthesizer is going to be getting these rough signals and smoothing them out internally. Once I start working with my own synthesizers, I need audio-rate controls and this becomes really wasteful, especially since I don't know up front what the eventual backend will be. I'd be forced to use an audio-level sample rate globally and then thin it out for MIDI.

Since I always wind up serializing the signal in one way or another at the end, having an efficient representation is important. This is also why the traditional fixed sampling rate is out, even though the sparse approach adds plenty of complexity (for instance, resampling both inputs to add them together).

The next thought is to retain the sparse [(Time, Y)] representation but interpret it as linear segments. This means a discontinuous segment actually requires two samples, e.g. [(0, 0), (1, 0), (1, 1), (2, 1)]. Leaving that with a sample-oriented API becomes seriously error prone, because you have to remember to handle before and after coincident samples, split segments when merging or slicing signals, etc. But perhaps with an explicitly segment-oriented API I could hide all of that. Perhaps have a special encoding for flat segments if they're common enough... though the obvious encodings don't actually save any space and add complexity, so maybe not bother with that part. I've never heard of anything like that though; are there any examples out there?

Of course, the most idiomatic representation is surely a function Time -> Y. Not only can I concatenate curves with perfect accuracy and arbitrary resolution and leave the sampling to the backend, it also elegantly allows efficient transformations. For instance, shifting the Time is just composing addition on the front, while in a sample-oriented representation you have to either transform all the samples, or add a field for an offset and remember to have every access function take it into account. That in turn adds plenty of complexity and only works for the specific transformations hardcoded in. In practice, f(x+k) and k * f(x) serve most purposes.

I haven't tried this yet, but some issues make me hesitate. One is that I lose structure. To find the inflection points I'd have to sample and see how the values change. For instance, I'll surely find myself trying to infer linear segments back out again, because various backends (including GUI) do well with linear segments. And then I worry about memory leaks. For a data structure, I can flatten the whole thing and be sure no thunks are inside, but for a function built from composing other functions, I have to make sure every single component function isn't holding on to anything it doesn't need. It seems very dangerous.

So maybe in the pure form the function is out. Maybe there's some kind of hybrid approach, with a pair of a function and a vector of annotations of where the break points are, with say Annotation = Flat | Linear | Other. I'd have to transform them together, so I still wind up with a Vector (Time, Annotation) with some of the same problems as the (Time, Y) samples, but maybe it's doable. But even if it is, I might not need the additional accuracy over approximation with linear segments, and I don't see any way around the memory leak problem. Still, it would be interesting to hear if there are implementations of this kind of approach.

I think in the end I've more or less convinced myself to continue with the linear segments approach, but put on an explicitly segment-oriented API that makes it hard to mess them up, whatever that winds up looking like. But has anyone else faced this kind of problem, or seen elegant solutions to it?
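One shape such a segment-oriented API could take is sketched below. All the names here are made up for illustration; the point is that callers only ever see whole segments, and the coincident-sample invariant is produced in exactly one place, at serialization time:

```haskell
type Time = Double
type Y = Double

-- A segment from (t0, v0) to (t1, v1); a jump is two segments whose
-- endpoints share a time but not a value.
data Segment = Linear Time Y Time Y
  deriving (Eq, Show)

newtype Signal = Signal [Segment]

-- Render to sample pairs: the doubled sample at each discontinuity
-- falls out automatically, so no caller has to remember it. (A real
-- version would also merge the duplicate sample at continuous joins.)
toSamples :: Signal -> [(Time, Y)]
toSamples (Signal segs) =
  concatMap (\(Linear t0 v0 t1 v1) -> [(t0, v0), (t1, v1)]) segs

-- Shifting in time stays a whole-segment operation.
shift :: Time -> Signal -> Signal
shift dt (Signal segs) =
  Signal [Linear (t0 + dt) v0 (t1 + dt) v1 | Linear t0 v0 t1 v1 <- segs]
```

With this, the discontinuous example from the post is Signal [Linear 0 0 1 0, Linear 1 1 2 1], and toSamples reproduces [(0,0),(1,0),(1,1),(2,1)] without anyone handling coincident samples by hand.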

Thanks!

On Tue, Nov 7, 2017 at 4:39 AM, Ivan Perez <email obscured>> wrote:

> They were.
>
> I suspect the work needed for this may be more than it seems at first: I
> should have received a message to verify something about my Haskell
> Symposium and FARM talks, and haven't received either of them so far.
>
> I can contact the FARM organiser on your behalf if you want :)

Yes please, if you think it will help. I guess it's already in the pipeline, but maybe a reminder would encourage whoever is doing the work.

They were.

I suspect the work needed for this may be more than it seems at first: I should have received a message to verify something about my Haskell Symposium and FARM talks, and haven't received either of them so far.

I can contact the FARM organiser on your behalf if you want :)

Ivan

On 07/11/17 08:42, Evan Laforge wrote:

> Does anyone know if these were recorded? All I can find are the
> livestream recordings at https://livestream.com/oxuni/ICFP-2017, which
> are not organized and don't seem to include the FARM session.

Does anyone know if these were recorded? All I can find are the livestream recordings at https://livestream.com/oxuni/ICFP-2017, which are not organized and don't seem to include the FARM session.

**alex** AlgoMech festival of Algorithmic and Mechanical Movement, Sheffield UK, 8-12 November 2017. Posted at

*12:40am, Nov 05*

*ALGOMECH* Festival of Algorithmic and Mechanical Movement
8-12th November 2017, Sheffield city centre
http://algomech.com/

A celebration of algorithms and mechanisms in the arts:

- Premiere of new work from 65daysofstatic: http://algomech.com/2017/events/65dos/
- Full-on Algorave (dance music from algorithms + mechanisms): http://algomech.com/2017/events/algorave/
- Sonic Pattern concert: http://algomech.com/2017/events/sonic-pattern/
- Symposium on Unmaking: http://algomech.com/2017/events/unmaking-symposium/
- Exhibition of Algorithmic and Mechanical Art: http://algomech.com/2017/events/exhibition/
- Workshops: Pulse+Patterns (http://algomech.com/2017/events/pulse-patterns/), E-Textiles (http://algomech.com/2017/events/etextiles/), Algorave Academy (http://algomech.com/2017/events/algorave-academy/)

Full programme and booking: http://algomech.com/

Funded by Arts Council England and PRS Foundation

I've tried to do the same, but to no avail :( Looking forward to knowing if you make this work!

On Sat, Oct 7, 2017 at 9:20, numa <email obscured>> wrote:

> Hi
>
> I read these posts a while ago and intended to solve the problem on my
> own, but I could not.
>
> The following Vivid code does not work for Tidal:
>
>     defineSD $ sdNamed "mysine" () $ do
>        s <- sinOsc (freq_ 440)
>        o <- out_ 0
>        out o [s, s]
>
> I think that Tidal cannot know about the synths written from Vivid,
> because Tidal works with SynthDescs, not with SynthDef names.
>
> In SuperCollider you create a SynthDesc using the .add message:
>
>     SynthDef("mysine", { | freq = 800, out |
>         Out.ar(out, SinOsc.ar(freq, 0, 0.5)) }).add;
>
> Now, from Tidal, you can do:
>
>     d1 $ n "e5*4 d5*2" # s "mysine"
>     d1 silence
>
> To browse the properties of SynthDescs, from SuperCollider you can do:
>
>     SynthDescLib.global.browse;
>
> Remove the SynthDesc:
>
>     SynthDef.removeAt("mysine");
>
> In the SuperCollider help page about SynthDesc you can read: "SynthDescs
> are needed by the event stream system, so when using Pbind, the
> instrument's default parameters are derived from the SynthDesc."
>
> Now, from SuperCollider, try:
>
>     ~sine = SynthDef("sine3", { | freq=800, out |
>         Out.ar(out, SinOsc.ar(freq, 0, 0.5))})
>     b = ~sine.asBytes
>     t = CollStream(b)
>     l = SynthDescLib.getLib(\global);
>     d = l.readDescFromDef(t, true, ~sine)
>     s.sendMsg("/d_recv", b)
>
> And, from Tidal:
>
>     d1 $ n "e5*4 d5*2" # s "mysine"
>     d1 silence
>
> This works for me, and you can see the SynthDesc "mysine" in the browser.
>
> In SuperCollider, to remove the SynthDesc:
>
>     SynthDef.removeAt("mysine");
>
> I don't know how to solve this problem from Vivid, but I hope this info
> can help.
>
> --
>
> Read the whole topic here: Haskell Art:
> http://lurk.org/r/topic/4wr0kYm8HizhcnoQnI41k5
>
> To leave Haskell Art, email <email obscured> with the following
> email subject: unsubscribe

--
nikita tchayka . functional data scientist { nickseagull.github.io }

Hi

I read these posts a while ago and intended to solve the problem on my own, but I could not.

The following Vivid code does not work for Tidal:

    defineSD $ sdNamed "mysine" () $ do
       s <- sinOsc (freq_ 440)
       o <- out_ 0
       out o [s, s]

I think that Tidal cannot know about the synths written from Vivid, because Tidal works with SynthDescs, not with SynthDef names.

In SuperCollider you create a SynthDesc using the .add message:

    SynthDef("mysine", { | freq = 800, out |
        Out.ar(out, SinOsc.ar(freq, 0, 0.5)) }).add;

Now, from Tidal, you can do:

    d1 $ n "e5*4 d5*2" # s "mysine"
    d1 silence

To browse the properties of SynthDescs, from SuperCollider you can do:

    SynthDescLib.global.browse;

Remove the SynthDesc:

    SynthDef.removeAt("mysine");

In the SuperCollider help page about SynthDesc you can read: "SynthDescs are needed by the event stream system, so when using Pbind, the instrument's default parameters are derived from the SynthDesc."

Now, from SuperCollider, try:

    ~sine = SynthDef("sine3", { | freq=800, out |
        Out.ar(out, SinOsc.ar(freq, 0, 0.5))})
    b = ~sine.asBytes
    t = CollStream(b)
    l = SynthDescLib.getLib(\global);
    d = l.readDescFromDef(t, true, ~sine)
    s.sendMsg("/d_recv", b)

And, from Tidal:

    d1 $ n "e5*4 d5*2" # s "mysine"
    d1 silence

This works for me, and you can see the SynthDesc "mysine" in the browser.

In SuperCollider, to remove the SynthDesc:

    SynthDef.removeAt("mysine");

I don't know how to solve this problem from Vivid, but I hope this info can help.

Hey, I tried to follow these instructions but am not having any luck so far. I filed an issue here to document my findings: https://github.com/vivid-synth/vivid/issues/3 But I guess you would probably prefer to discuss it here! I don't mind either way.