NHacker Next
Why tail-recursive functions are loops (kmicinski.com)
122 points by speckx 4 days ago | 130 comments
hinkley 16 hours ago [-]
Practically the day after I learned about tail recursion in CS class, I learned that almost all recursive calls can be translated to iteration, that in many cases the iterative version is easier to scan, is as fast if not faster, and that they can usually handle much much larger inputs than recursion due to avoiding stack overflow.

Tail recursion is meant to fix the latter. But what we mean to happen and what actually happens ain't ever exactly similar.

Tail recursion IME is a bigger foot gun than relying on someone to add a new conditional branch at the end of a block in an iterative algorithm without fucking it up in the process. And iteration responds generally better to Extract Function. And while I can think of counter cases easily enough, in the large iteration is less work and less vigilance. And you cannot scale a project up without the vigilance requirement amortizing basically to 0 per line of code.

pdpi 11 hours ago [-]
> Tail recursion IME is a bigger foot gun

This is true for some languages, but not all.

E.g. Scala has the @tailrec annotation, which makes it a compile error for the annotated function to not be tail recursive. Clojure doesn't have tail call elimination, but has the `recur` special form for explicit recursion that is guaranteed to not consume any stack space. Rust has reserved the `become` keyword that will eventually guarantee tail call elimination (so pretty similar to Clojure's recur, but I think Rust's version will allow mutual recursion).

Zig goes the whole hog, and has `@call(modifier, fn, args)`, where `modifier` can be things like compile_time, always_tail, always_inline, never_tail, never_inline, and a bunch of other desirable guarantees you might want.
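
To make the Scala point concrete, roughly what that looks like (a toy example, nothing more):

    import scala.annotation.tailrec

    @tailrec
    def sum(xs: List[Int], acc: Int = 0): Int = xs match {
      case Nil       => acc
      case x :: rest => sum(rest, acc + x)  // tail call: compiled to a jump
    }

    // Writing it the "natural" way, with the addition outside the call,
    // is no longer tail recursive, and @tailrec turns that into a compile
    // error instead of a stack overflow waiting to happen:
    //   case x :: rest => x + sum(rest)    // error: not in tail position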

phh 1 hours ago [-]
> > Tail recursion IME is a bigger foot gun

> This is true for some languages, but not all.

Useless anecdote: tail recursion can be a foot gun even in Scala.

I did a (screening) job interview at Datadog; they asked for the "give the spare change back on that amount of money" exercise (simple variant), "in whichever language you want". I did my implementation in tail-recursive Scala (with the annotation). I ended up spending the rest of the call trying to explain that tail recursion doesn't explode in memory (and failed).
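
Roughly the shape of it (reconstructed from memory, not the actual interview code): greedy change-making, with the annotation so the compiler checks the tail calls.

    import scala.annotation.tailrec

    // `acc` accumulates the coins handed back so far.
    @tailrec
    def makeChange(amount: Int, coins: List[Int], acc: List[Int] = Nil): List[Int] =
      coins match {
        case Nil => acc
        case c :: rest =>
          if (amount >= c) makeChange(amount - c, coins, c :: acc)  // hand out another c
          else makeChange(amount, rest, acc)                        // move to a smaller coin
      }

    makeChange(87, List(50, 20, 10, 5, 2, 1))  // List(2, 5, 10, 20, 50)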

pdpi 22 minutes ago [-]
> I ended up spending the rest of the call trying to explain that tail recursion doesn't explode in memory (and failed)

I count that as a successful interview – they sent an interviewer who doesn't understand how tail recursion enables tail call elimination, and is either unwilling or unable to listen when it is explained to them. Sounds like the company would be a bad fit for somebody whose go-to approach is to implement the solution that way.

aw1621107 10 hours ago [-]
> Rust has reserved the `become` keyword that will eventually guarantee tail call elimination (So pretty similar to Clojure's recur, but I think Rust's version will allow mutual recursion)

Support got added to nightly quite recently, in fact - on 2025-07-31 [0].

[0]: https://github.com/rust-lang/rust/pull/144232

LandR 5 hours ago [-]
Clojure has mutual recursion via (trampoline ...)

https://clojuredocs.org/clojure.core/trampoline

tombert 16 hours ago [-]
In my mind, the biggest advantage to using tail recursion over vanilla loops is the ability to keep using persistent data structures without any (apparent) mutation.

At least in theory, a tail recursive call will be converted into a dumb jump, so there shouldn't be a performance penalty, but since from your code's perspective you're passing in the stuff for the next iteration, you can keep using the pretty and easy-to-reason-about structures without creating any kind of mutable reference.

I'm not 100% sold on tail recursion in a broader sense, but at least with Clojure's loop/recur stuff it is kind of cool to be able to keep using persistent data structures across iterations of loops.

weitendorf 15 hours ago [-]
What makes tail recursion "special" is that there exists a semantically equivalent mutable/iterative implementation to something expressed logically as immutable/recursive. [0]

Of course, this means that the same implementation could also be directly expressed logically in a way that is mutable/iterative.

func pow(uint base, uint n): n == 0 ? return 1 : return n * pow(base, n-1)

is just

func pow(uint base, uint n): uint res = 1; for(i=0; i<n; i++){ res *= n} return res

There is no real "advantage" to, or reason to "sell" anybody on tail call recursion if you are able to easily and clearly represent both implementations, IMO. It is just a compiler/runtime optimization, which might make your code more "elegant" at the cost of obfuscating how it actually runs + new footguns from the possibility that code you think should use TCO actually not (because not all immutable + recursive functions can use TCO, only certain kinds, and your runtime may not even implement TCO in all cases where it theoretically should).

As an aside, in C++ there is something very similar to TCO called copy-elision/return-value-optimization (RVO): [1]. As with TCO it is IMO not something to "buy into" or sell yourself on, it is just an optimization you can leverage when structuring your code in a way similar to what the article calls "continuation passing style". And just like TCO, RVO is neat but IMO slightly dangerous because it relies on implicit compiler/runtime optimizations that can be accidentally disabled or made non-applicable as code changes: if someone wanders in and makes small semantic changes to my code relying on RVO/TCO for performance they could silently break something important.

[0] EXCEPT in practice all implementation differences/optimizations introduce observable side effects that can otherwise impact program correctness or semantics. For example, a program could (perhaps implicitly) rely on the fact that it errors out due to stack overflow when recursing > X times, and so enabling TCO could cause the program to enter new/undesirable states; or a program could rely on a function F making X floating point operations taking at least Y cycles in at least Z microseconds, and not function properly when F takes less than Z microseconds after enabling vectorization. This is Hyrum's Law [2].

[1] https://en.wikipedia.org/wiki/Copy_elision#Return_value_opti...

[2] https://www.hyrumslaw.com/

nothrabannosir 6 hours ago [-]
> func pow(uint base, uint n): n == 0 ? return 1 : return n * pow(base, n-1)

Nitpick: that’s not tail recursive. Something like def pow(base, n, acc=1) = match n case 0 => acc; default => pow(base, n-1, acc*base)

pyrale 1 hours ago [-]
Given that `base` is never used, I suspect that's not the only issue :)
eru 11 hours ago [-]
> Of course, this means that the same implementation could also be directly expressed logically in a way that is mutable/iterative.

Yes, compilers exist.

> There is no real "advantage" to, or reason to "sell" anybody on tail call recursion if you are able to easily and clearly represent both implementations, IMO.

Avoiding mutation avoids headaches.

> [0] EXCEPT in practice all implementation differences/optimizations introduce observable side effects that can otherwise impact program correctness or semantics. For example, a program could (perhaps implicitly) rely on the fact that it errors out due to stack overflow when recursing > X times, and so enabling TCO could cause the program to enter new/undesirable states; or a program could rely on a function F making X floating point operations taking at least Y cycles in at least Z microseconds, and not function properly when F takes less than Z microseconds after enabling vectorization. This is Hyrum's Law [2].

These programs are likely not standards compliant. (And that's not just true for the C++ standard but for basically any language with a standard.)

> And just like TCO, RVO is neat but IMO slightly dangerous because it relies on implicit compiler/runtime optimizations that can be accidentally disabled or made non-applicable as code changes:

Who says TCO has to be always implicit? In eg Scheme it's explicit in the standard, and in other languages you can have annotations.

(Whether a call is in tail position is more or less a property you can ascertain syntactically, so your annotation doesn't even have to be understood by the compiler: the linter is good enough. That would catch your 'accidental changes' part.

Things get more complicated, when you have implicit clean-ups happen after returning from the function. Like calling destructors in C++. Then the function call is arguably not in a tail position anymore, and so TCO doesn't apply. Whether this is detectable reliably at compile time depends on the details of your language.)

gleenn 14 hours ago [-]
I would argue having the parameters that change during the loop be explicit is a very nice advantage. Agree that the two can be equivalent in terms of execution, but the readability and explicitness, and the fact that all the parameters are listed in the same place, are great.
weitendorf 14 hours ago [-]
Agreed. Some people really like FP a lot, and I think it's underrated that the kinds of functions where TCO is applicable tend to be so simple that they are not really that inelegant when expressed imperatively. My true opinion is that relying on TCO is usually choosing ideological adherence to FP (or "code that looks cooler") over reliability/performance/communication.

That said, just as I'd expect experienced C++ programmers to be able to recognize others' code using RVO and be careful not to restructure things to break it, I'd concede that experienced FP programmers might be unlikely to accidentally break others' TCO. It's just that ad absurdum you cannot expect every developer to be able to read every other developers' mind and recognize/workaround all implicit behavior they encounter.

eru 10 hours ago [-]
> [...] and I think it's underrated that the kinds of functions where TCO is applicable tend to be so simple that they are not really that inelegant when expressed imperatively.

I suspect you are only thinking of patterns that are basically equivalent to a loop. I might agree with that.

TCO really shines when you want to implement state machines. See eg https://news.ycombinator.com/item?id=43076088 for an example where using tail calls in Python's interpreter loop gives nice performance benefits. Similar also for Lua.
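
A toy sketch of that flavour (Scala here, where @tailrec only covers self-calls, so the whole machine is folded into one function; with guaranteed tail calls each state could instead be its own function that tail-calls the next):

    import scala.annotation.tailrec

    // Tiny state machine: count words by scanning characters.
    sealed trait State
    case object Between extends State
    case object InWord  extends State

    @tailrec
    def countWords(s: String, i: Int = 0, st: State = Between, n: Int = 0): Int =
      if (i == s.length) n
      else (st, s(i).isWhitespace) match {
        case (Between, false) => countWords(s, i + 1, InWord, n + 1)  // entering a word
        case (InWord, true)   => countWords(s, i + 1, Between, n)     // leaving a word
        case (other, _)       => countWords(s, i + 1, other, n)       // staying put
      }

    countWords("tail calls are just jumps")  // 5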

> [...] I'd concede that experienced FP programmers might be unlikely to accidentally break others' TCO.

Compiler (or linter) checkable annotations would help here. You are right that you want to make it possible for programmers to statically assert somehow that their function call is a tail call.

Btw this reminds me: recursion isn't just something you do in computation, but also in data structures (amongst other things). In eg Rust the members of your data structure are typically just laid out one after another, but when you have a recursive structure (and in certain other cases) you need to box it, otherwise you'd get an infinitely large data structure. Boxing is more or less equivalent to using indirection via a pointer.

However, unboxing isn't really like TCO. It's more like in-lining.

gpderetta 4 hours ago [-]
To nitpick a bit, NRVO is an optimization as there is no guarantee that it will be performed, but RVO is now guaranteed (you can now return temporary non-copyable/non-movable objects, for example).
ramchip 15 hours ago [-]
I think you're confusing mutation and variable reassignment?
tombert 14 hours ago [-]
I'm saying that if I do a regular loop, something needs to explicitly mutate, either a reference or a value itself.

`for (var i = 0; i < n; i++)`, for example, requires that `i` mutate in order to keep track of the value so that the loop can eventually terminate. You could also do something like:

    var i = new Obj(); 
    while (!i.isDone()) {
        //do some stuff
        i = new Obj(); 
    }
There, the reference to `i` is being mutated.

For tail recursion, it's a little different. If I did something like Factorial:

    function factorial(n, acc) {
        if (n <= 1) return acc;
        return factorial(n - 1, acc * n);
    }
    
Doing this, there's no explicit mutation. It might be happening behind the scenes, but everything there from the code perspective is creating a new value.

In something like Clojure, you might do something like

    (defn vrange [n]
      (loop [i 0 v []]
        (if (< i n)
          (recur (inc i) (conj v i))
          v)))
(stolen from [1])

In this case, we're creating "new" structures on every iteration of the loop, so our code is "more pure", in that there's no mutation. It might be being mutated in the implementation but not from the author's perspective.

I don't think I'm confusing anything.

[1] https://clojure.org/reference/transients

ramchip 4 hours ago [-]
> There, the reference to `i` is being mutated.

That's rebinding. Mutation is when you change the state of an object. Variables are not objects. You can't have a reference (aka pointer) pointing to a variable.

mrkeen 2 hours ago [-]
I don't know if you're referring to a particular language's peculiarities, but it doesn't really matter. It's mutation.

tombert's point (I think) is that in a functional setting, factorial(n) is always factorial(n). In an imperative setting, first it's 1, then 2, then 6, then 24, etc.

Here's factorial calculated imperatively in C#.

  public async Task Factorial() {
      long primitiveResult = 1L;
      Wrapped<long> objectResult = new Wrapped<long>(1L);

      var observer = new Task(ObserveVariable);
      observer.Start();

      for (var i = 2; i <= 15; i++) {
        primitiveResult *= i;
        objectResult = new Wrapped<long>(objectResult.Value * i);
        await Task.Delay(100);
      }

      observer.Wait();
      return;

      void ObserveVariable() {
        while (primitiveResult != 1307674368000 || objectResult.Value != 1307674368000) {
          Thread.Sleep(100);
        }
      }
  }
There are two results (one primitive and one object) to show that it doesn't matter. Maybe there's no such thing as a Reference-to-a-Variable in whatever language you have in mind, but in the above code ObserveVariable refers to a variable (while it mutates).
1718627440 3 hours ago [-]
> You can't have a reference (aka pointer) pointing to a variable.

What?

    int a;
    int * p = &a;

?
eru 10 hours ago [-]
You could imagine a language like eg Python with a for-each loop that creates a new variable for each run through the loop body.

Basically, just pretend the loop body is a lambda that you call for each run through the loop.

It might make sense to think of the common loops as a special kind of combinator (like 'map' and 'filter', 'reduce' etc.) And just like you shouldn't write everything in terms of 'reduce', even though you perhaps could, you shouldn't write everything in terms of the common loops either.

Make up new combinators, when you need them.

For comparison, in Haskell we seldom use 'naked' recursion directly: we typically define a combinator and then use it.

That often makes sense, even if you only use the combinator once.

gpderetta 4 hours ago [-]

  for(int i : some-container) {
     do-something-with(i);
  }
Where is the mutation?
mrkeen 4 hours ago [-]
It's implied by void-returning do-something-with(...).

Otherwise you're just burning cycles and a good compiler should dead-code eliminate the whole loop.

(see https://news.ycombinator.com/item?id=44873488)

gpderetta 8 minutes ago [-]
We are discussing mutation in the loop itself, but sure:

  for(int x: container) {
      yield frob(x);
  }
mrkeen 5 hours ago [-]
Nope. Loops are unnecessary unless you have mutation. If you don't mutate, there's no need to run the same code again: if the state of the world did not change during iteration 1, and you run the same code on it again, the state of the world won't change during iteration 2.
charcircuit 14 hours ago [-]
>without creating any kind of mutable reference.

The parameter essentially becomes a mutable reference.

eru 10 hours ago [-]
No. There's no mutation happening.

Of course, a compiler might do whatever shenanigans it has to do. But that's an implementation detail and doesn't change how the language works.

(And let's not pretend that CPUs execute something that resembles an imperative language like C under the hood. That might have been true in the PDP-11 or VAX days. These days with out-of-order execution, pipelining etc what's actually happening doesn't really resemble one-after-another imperative code much.)

dgb23 5 hours ago [-]
I didn’t eat the sandwich, I just expressed its transformation into energy.
mrkeen 5 hours ago [-]
If it does, then you would be able to express a race condition with it.

EDIT: I re-read parent comment.

>> the biggest advantage to using tail recursion over vanilla loops is the ability to keep using persistent data structures without any (apparent) mutation.

Yes

>> at least with Clojure's loop/recur stuff it is kind of cool to be able to keep using persistent data structures across iterations of loops

There's the implied mutation.

tombert 14 hours ago [-]
No disagreement on that but that's an implementation detail and from the coder's perspective there's no mutable references.
charcircuit 14 hours ago [-]
From the coder's perspective it is. There's just different syntax for assigning to them.
tombert 14 hours ago [-]
No, from the coder's perspective it looks like I'm passing an argument into a function. I suppose you could say that arguments in general are "mutable references", and I guess that's not necessarily "untrue", but that's not generally how I visualize it.

If I pass a persistent structure into a tail call, it looks like I'm passing a copy into there, in the same way that I might if I did something like `Math.abs(myValue * 3)`; the compiler converts it to a mutable reference but I see it as the same as passing a value.

weitendorf 14 hours ago [-]
From the coder's perspective, there are no mutable references only if the coder does not really rely on or care about the possibility that their code uses TCO. If they actively want TCO then they definitely care about the performance benefits they get from underlying mutability/memory reuse/frame elision.
tombert 12 hours ago [-]
Yeah but that's why I prefer Clojure's loop/recur thing. You're not allowed to have non-tail recursion there, so it's not implicit, so you really can pretend that there's no mutable references.
eru 11 hours ago [-]
> Practically the day after I learned about tail recursion in CS class, I learned that almost all recursive calls can be translated to iteration, that in many cases the iterative version is easier to scan, is as fast if not faster, and that they can usually handle much much larger inputs than recursion due to avoiding stack overflow.

What do you mean by easier to scan? I find (most) loops [0] hard to read, because they typically involve mutable state.

When properly implemented, tail calls are as fast as gotos and loops, and don't blow up any stack. (Not all languages are implemented with a stack in any case.)

However you have one point:

Most of the time, we don't use recursion directly in our programs even in a language like Haskell or Scheme. Instead we define a 'combinator' that encapsulates the specific, limited recursion scheme that we want, and then use that one. This is very similar to how people generally don't use goto directly in their programs.

You might be familiar with the combinators 'map', 'filter' and perhaps 'reduce'/'foldr'. You could re-interpret the various common loop types as such recursion combinators that weaker languages have to bake into their language, because they are not strong enough to express them as a user-defined library. And indeed, Haskell has Control.Monad.Loops (https://hackage.haskell.org/package/monad-loops-0.4.3/docs/C...) which gives you exactly the common loop types as a library.

Some examples of less common combinators are eg 'unfold' or various combinators to traverse trees or graphs.

[0] The foreach-loop over some sequence is less headache inducing than eg a while-loop.
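
For a concrete taste of the last point, here is a hand-rolled 'unfold' in Scala (recent standard libraries ship one already; the point is only that it's an ordinary user-level definition, with the recursion hidden inside):

    import scala.annotation.tailrec

    // Build a list from a seed; callers never write the recursion themselves.
    def unfold[S, A](seed: S)(step: S => Option[(A, S)]): List[A] = {
      @tailrec
      def go(s: S, acc: List[A]): List[A] = step(s) match {
        case None          => acc.reverse
        case Some((a, s2)) => go(s2, a :: acc)
      }
      go(seed, Nil)
    }

    unfold(10)(n => if (n == 0) None else Some((n, n - 1)))  // List(10, 9, ..., 1)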

bjoli 15 hours ago [-]
I am in the other camp. I prefer tail recursion and recursion over loops. However: for the simple cases it can and should probably be abstracted away, like the Racket for loops or my own goof-loop [1].

I just feel that a recursive call makes state much more clear, but then again I am no fan of mutation in general. In my old Python days I think a good 60% of my bugs were due to me being bad at managing state.

[1] https://rikspucko.koketteriet.se/bjoli/goof-loop

iamevn 13 hours ago [-]
I'm in the same boat, recursion tends to be easier for me to reason about because I'm expressing the problem in terms of some base case that incoming parameters are being reduced to rather than some condition that an iterative approach is working towards.
eru 10 hours ago [-]
I prefer recursion over loops. But even more I prefer abstracting away the recursion into combinators.

One of my favourite combinators is matrix multiplication. You define what 'addition' and 'multiplication' mean in your case, and all of a sudden you get an algorithm that computes shortest paths in a graph or does regex matching. See https://news.ycombinator.com/item?id=9751987

But for more bread and butter cases, there's 'map' over various data structures, and 'reduce' and traversals of trees etc.

akdor1154 10 hours ago [-]
There are cases where iteration is clearer, and there are cases where recursion is clearer.

It's well worth being familiar with both - if you learn how to shoehorn both approaches where they aren't ideal, your judgement on avoiding such practices will improve. :)

zelphirkalt 13 hours ago [-]
I have my doubts about any CS class/lecture that teaches that the "iterative version is easier to scan". Might just be the bias or inexperience of the lecturer. By now I find recursion often easier to read than some for loop with its loop header and counter that I need to think about and update in my mind. And then in many languages the loop does not even have a return value, because it is not an expression but a statement. Meeehhh, very meehhh in many languages. Not all, but many.

I think maybe in languages like Ruby or Smalltalk a loop can be more readable, because of how they structure it as messages to objects, rather than special keywords in the language.

hinkley 11 hours ago [-]
> I have my doubts about any CS class/lecture, that teaches, that the "iterative version is easier to scan".

Well then you’re in luck because that was not an academic statement but a professional opinion - mine.

You can’t optimize the patterns you can’t see because they’re obscured by accidental complexity. And iteration is a frequent trick I use to surface deeper patterns.

bigDinosaur 10 hours ago [-]
Things like DFS add a lot of noise in the way of seeing the pattern IMO, but then again if you want explicit stack management and that's the pattern you want to see I suppose the iterative versions are clearer.
CalChris 16 hours ago [-]
I must have missed this class. How does one convert a recursive descent parser into an iterative one?
Jtsummers 15 hours ago [-]
Every recursive algorithm can be made into an iterative algorithm if you use an explicit stack instead of the implicit call stack. It's not always cleaner.

In tail recursive algorithms, there is no stack, it's just a straight loop.

  def foo(state, acc): # if acc is needed
    if base-condition(state): return acc
    return foo(next(state), f(state, acc))
Is simply:

  def foo(state):
    acc = initial
    while not base-condition(state):
      acc = f(state, acc)
      state = next(state)
    return acc
If it's not tail recursive you introduce a stack, for instance a DFS on a binary tree:

  def search(node, val):
    if node is None: return False # empty tree, only need this check once
    stack = [node]
    while stack:
      n = stack.pop()
      if n.val == val: return True
      if n.right: stack.append(n.right)
      if n.left:  stack.append(n.left)
    return False
Note the insertion order is reversed from the recursive calls in a typical DFS. We want left to be searched first and then its children and then we "recur" back to right and deal with its children, so we need to push right into the stack first and then left.

When you have multiple mutually recursive functions (as is likely the case in a recursive descent parser) then things become more complicated, but it's still feasible.

LegionMammal978 15 hours ago [-]
Sometimes the messy translation into an explicit stack and dispatch loop is necessary, if you want to pause the calculation, serialize the current state, and reconstitute it later. (E.g., if you want to add disk-saved checkpoints, a frequent hassle in some of my projects.) Coroutines can get you the equivalent of pausing and resuming for a recursive routine, but I'm not aware of any language that lets you serialize the call stack.
Jtsummers 15 hours ago [-]
> I'm not aware of any language that lets you serialize a whole call stack.

That's basically what continuations provide. Scheme, SML, and others provide them.

LegionMammal978 14 hours ago [-]
Continuations allow an inactive call stack to sit around in memory. But do any of those languages let you save a continuation to a file and resume it in a different execution of the program, without massive contortions to the code? That's what I mean by serialization.
kmicinski 11 hours ago [-]
Seems potentially interesting to explore what would be required to store durable continuations. Feels very related to incrementalization and provenance, as you can see materializing a continuation to disk (whatever storage backend) requiring dependence tracking to do anything other than simply snapshotting the whole program state. I am just spitballing though, not sure if anyone has actually tried this.
tylerhou 12 hours ago [-]
It is much easier and more maintainable to convert to continuation passing style. If you also use defunctionalization to allocate closures on a stack instead of a fresh heap allocation for every closure, you will achieve performance on par with an explicit stack. (In fact, defunctionalization is a mechanical transformation that produces exactly the data you would store in an explicit stack!)

Before I knew about CPS and defunctionalization, I wrote a Python decorator that did exactly the transformation you describe. https://github.com/tylerhou/fiber. Now that I know about CPS and defunctionalization, I realize that my work was not the best implementation (but it was a fun learning experience!).
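
Roughly what the two steps look like on a tiny example (a Scala sketch rather than Python, summing a binary tree):

    sealed trait Tree
    case class Leaf(v: Int)           extends Tree
    case class Node(l: Tree, r: Tree) extends Tree

    // Step 1: CPS. Every call passes "the rest of the work" along as a
    // continuation, so the closures now play the role of stack frames.
    def sumCps(t: Tree, k: Int => Int): Int = t match {
      case Leaf(v)    => k(v)
      case Node(l, r) => sumCps(l, vl => sumCps(r, vr => k(vl + vr)))
    }

    // Step 2: defunctionalize. Each closure shape becomes a data constructor,
    // and the chain of continuations is literally the explicit stack.
    sealed trait Kont
    case object Done                          extends Kont
    case class SumRight(r: Tree, rest: Kont)  extends Kont
    case class AddLeft(left: Int, rest: Kont) extends Kont

    def sumTree(t0: Tree): Int = {
      @scala.annotation.tailrec
      def loop(state: Either[(Tree, Kont), (Kont, Int)]): Int = state match {
        case Left((Leaf(v), k))           => loop(Right((k, v)))
        case Left((Node(l, r), k))        => loop(Left((l, SumRight(r, k))))
        case Right((Done, v))             => v
        case Right((SumRight(r, k), v))   => loop(Left((r, AddLeft(v, k))))
        case Right((AddLeft(left, k), v)) => loop(Right((k, left + v)))
      }
      loop(Left((t0, Done)))
    }

    sumTree(Node(Node(Leaf(1), Leaf(2)), Leaf(3)))  // 6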

themk 10 hours ago [-]
There is a library for Haskell that will do it. Though it doesn't support all GHC versions. It's very nifty if you need it.

https://github.com/jberthold/packman

spauldo 10 hours ago [-]
If you have to use an explicit stack, you're still doing recursion. Useful if you're implementing the Ackermann function in FORTRAN 77, I suppose. Unless you're worried about running out of stack space, it's generally easier to reason about if you just use regular recursion.

That only applies for non-primitive recursive functions, though. Most people rarely encounter those. For primitive recursive functions, it's all down to the nature of the function and what's idiomatic for your language. If you're using Scheme, for example, recursion is the name of the game.

fellowniusmonk 14 hours ago [-]
When TCO recursion was first developed it was very explicitly called out as a syntactically and structurally improved GOTO, but still fundamentally a GOTO that could take params.

Recursion isn't physically real; any book that teaches the abstraction before explaining either the call stack (for non-TCO recursion) or the GOTO context is a book actively trying to gatekeep CS and weed out readers. (Not that any of those old Turbo Pascal/Borland books from the 90s actually shipped code that compiled.)

I had several people on HN of all places try to "teach me" recursion after this simple line inside a larger comment:

> It's like acting like recursion is physically real (it's not) when it smuggles in the call stack.

Recursion IS real abstractly. It's just not physically real, it was developed before we knew how DNA/RNA encoding handles the same issues in a more performant way.

ndriscoll 34 minutes ago [-]
For basic structural recursion on finite data structures, all you're doing is case analysis and substitution (i.e. "plugging in" values). How is that gatekeeping?

Math majors cover hundreds of algorithms per semester often using recursion without thinking much of it. For that matter, same with "higher order functions" (e.g. derivative operators, or even just translation/scaling of a graph). Even in introductory calculus, students cover things like fixed points (e.g. I remember discussing d^4/(dx)^4 sin = sin almost immediately after introducing derivatives).

Thinking in terms of physical machines is useful for reasoning about performance, but seems like a distraction for learning to think logically/correctly, which seems like a more important first step?

SJC_Hacker 9 hours ago [-]
I don't see how it would be gatekeeping.

Recursive functions are a mathematical concept, like the "imaginary" number, or "transcendental" numbers. Or negative numbers for that matter.

Simple example, the Fibonacci sequence: FIB(1) = 1; FIB(2) = 1; FIB(N) = FIB(N-1) + FIB(N-2).

There's no programming language or "physical" implementation needed in order to calculate FIB(N) for arbitrary N. Pencil and paper will do for small numbers

Jtsummers 13 hours ago [-]
> Recursion isn't physically real, any book that teaches the abstraction before explaining either the call stack (for non-TCO recursion) or in the GOTO context

Do you also believe that loops and functions should only be taught after the call stack and goto are taught? Neither of them are real either by your measure.

What a silly sentiment.

fellowniusmonk 10 hours ago [-]
Loops and functions can be physically represented as standalone things; they can be physically carved onto a mechanical surface and observed.

They don't smuggle anything in conceptually, their abstraction doesn't leave anything critical to their structure out. They are real and can be physicalized as stand alone objects.

I see you've never tried to teach a software class to children or other learners. Historically, recursion is _very_ poorly taught by those who already understand the concept. But I'm not saying you have to care about that; a lot of people think there are too many programmers already.

achierius 14 hours ago [-]
> It's just not physically real, it was developed before we knew how DNA/RNA encoding handles the same issues in a more performant way.

That was a sharp left turn -- how do you figure DNA/RNA are relevant here? I feel like iteration pre-dates our modern understanding of RNA in particular (though I could be mistaken) so I struggle to imagine how DNA/RNA were particularly informative in this regard.

kylec 15 hours ago [-]
You can do it by managing your own stack, but I would argue that doing so makes the algorithm LESS easy to scan and has its own sets of problems.
electroly 15 hours ago [-]
Same way as any other algorithm: with an explicit heap-allocated stack. I have a naive parser where I built operator precedence into the grammar as nested productions instead of using an algorithm like shunting yard. An input of ((((((1)))))) blew the stack. Converted it to an explicit stack and it was fine; deep for the call stack was not deep for my own heap-allocated stack. Nasty code, though--I think this serves as one of the counterexamples to the idea that the recursive code gets simpler when turning it into iteration. The original OP was talking more specifically about tail recursion, which a recursive descent parser won't be.
bjoli 15 hours ago [-]
There are many languages out there that can grow the stack these days. If you have a language that can do tail recursion, it is really a very neat complement. In lisps it means being able to write a list building function in a direct consing way, and still being faster than the TCO way.
pmg101 16 hours ago [-]
https://news.ycombinator.com/item?id=44837949
jandrese 16 hours ago [-]
By making your own stack and using loops.
javcasas 15 hours ago [-]
Recursion deals with recursive data structures, and iteration deals with "plain old" data structures.

When you use one to process the other, you get into trouble. For example, you need to manage a stack to do iteration on your binary tree. Or you need to construct slices to recurse on your arrays.

It's all about ergonomics.

pdpi 11 hours ago [-]
> For example, you need to manage a stack to do iteration on your binary tree

Recursion around trees can get you into trouble with the stack. Consider:

    def traverse(node):
      if node is None: return
      do_work(node.payload)
      traverse(node.left)
      traverse(node.right)
The second recursive call to traverse is in tail position and is a candidate for elimination, but the first one isn't and will _always_ eat stack space. This is fine if you know what you're doing, but can bite you in the arse if you're expecting TCO to save you.
javcasas 3 hours ago [-]
TCO stands for "TAIL call optimization". It will obviously not save your arse if the call is not in the tail of the function, exactly the same as "python generators will not help you if you are doing C++".
eru 10 hours ago [-]
What's a plain old data structure?

Linked lists are recursive, but you can use loops just fine to deal with them.

Similarly, it depends on what you do with your trees. If you want to go down a single path (eg to find an element in a search tree), loops aren't too bad.

And arrays don't necessarily ask for loops. Eg implementing quicksort or heap sort with loops is possible (and depending on your language, compiler and target machine might have performance benefits), but I'd hardly call it ergonomic.

Despite all my criticism you are hinting at a good point though: it's often useful to define your data structures together with combinators that work on them. So eg you want to define various tree traversals together with your tree. (Most people are familiar with the combinators 'map' and 'filter' for lists but that's a very small pool.)

Whether those combinators are implemented with recursion or loops is an implementation detail that the user of the combinator doesn't need to worry about too much.

dreamcompiler 15 hours ago [-]
This is overly simplistic.

Is a binary tree recursive from the perspective of a type declaration? Yes.

Is it recursive from the point of view of search? Depends. If you're doing depth-first search, then you need a stack and a recursive algorithm seems natural. If you're doing breadth-first search, then you need a queue and the algorithm is less obviously recursive.

In either case the program could be expressed recursively or as an explicit loop. When a stack is needed, the underlying hardware always provides automatic stack management so recursion feels natural in that case.

When a queue is needed it's more of a tossup since hardware and programming languages don't generally provide automatic queue management, so you're going to need to manage that data structure manually anyway. In which case whether you choose to use recursion or not is more of a personal preference.

eru 10 hours ago [-]
> When a stack is needed, the underlying hardware always provides automatic stack management so recursion feels natural in that case.

Not true at all. Eg Risc-V provides no stack management at all. But compilers and interpreters exist, so it doesn't make a difference to how your high level code 'feels'.

javcasas 15 hours ago [-]
So a tree is easier to do recursing vs iterating in some cases, whereas in other cases recursion and iteration have roughly the same level of complexity.

Kinda shows that recursive data structures are usually better dealt with recursion. Worst case you end up with the same level of difficulty as iteration.

busterarm 14 hours ago [-]
My career is practically built on the fact that other people, 99% of the time, miss the simplest representation of the data/problem.

Just look at myriad ways that people implement something like checking win conditions in Rock, Paper, Scissors. Switch statements, conditional hell, etc.

...you can just do something as simple as stick Rock, Scissors, Paper into a circular list and compare to the next element. It's practically readable as plain English. No programming skill required.

If you need to code golf you can use modulo arithmetic, for a different kind of "simple", but then you lose readability.
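
Something like this sketch (Scala, but any language with lists will do):

    // Each move beats whatever follows it in the ring (wrapping around).
    val ring = List("rock", "scissors", "paper")

    val beats: Set[(String, String)] =
      (ring :+ ring.head).sliding(2).map(p => (p(0), p(1))).toSet

    beats(("rock", "scissors"))  // true
    beats(("paper", "rock"))     // true
    beats(("rock", "paper"))     // false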

eru 10 hours ago [-]
You could have a big pattern-match statement for Rock, Scissors, Paper and it would still be readable (in a language that supports these things like eg Rust).

The circular list seems a bit too cute for me. Sure, it's possible, but seems like overkill.

busterarm 8 hours ago [-]
If you have to do a bunch of list manipulation yourself sure but most scripting languages make these things trivial.

Heck, you could easily write it using Terraform built-in functions.

eru 4 hours ago [-]
What do you mean by 'list' in this case? Python, for example, has a list data structure, but it's different from what C folks would naturally think of when you say 'list'.
Spivak 15 hours ago [-]
I'm in your camp, recursive code is hard for the next reader, which might be you, to understand. It's a bit too magic looking for my taste. If you're looping it should look like a loop, if you're pushing onto a stack you should get to see the stack. It's also safe against modifications which might silently break the tail-call nature of it until you blow out the stack later. It also gives you much saner and more debuggable stack traces because all the state is in the current frame.
busterarm 14 hours ago [-]
I never quite understood this recursion is hard/magic sentiment. Maybe it's because I started my CS education when it was still taught out of math departments or because it started with actually using programming languages to do algebra. Then again, the FP Kool-Aid is practically the blood in my veins at this point I guess.
hinkley 11 hours ago [-]
I’m good at spatial thinking, so on paper I should have zero issues with recursive code. But I also mentor and troubleshoot a lot and deep recursive code blows everyone’s mind. Even, it turns out, often the people who wrote it. Though they will swear otherwise.

Self recursive code takes on a fractal nature - at any call stack you cannot determine the call depth. You are in a twisty little maze.

When you layer calls horizontally, different tasks at different depths, it’s dead obvious where you are in the calculation. If all you can manage though is iteration, you still have the local variables to tell you where you are.

eru 10 hours ago [-]
I spend half my career with Haskell, OCaml, Erlang and we never had these problems with recursive code. (Or rather, we never had these problems and blamed it on recursion. We had to deal with plenty of bad code, of course.)

In contrast, I remember plenty of problems stemming from mutation (and lack of static typing etc).

busterarm 9 hours ago [-]
I certainly understand your perspective and I've seen what you're talking about, but I've just never run into problems with it personally...

and yeah, as others said, mutation is often the source of more problems.

Generally though I don't like hard "don't use X" rules. There's a time and place for everything -- it all comes down to what you're prioritizing at a given time. I try not to be dogmatic about anything.

javcasas 2 hours ago [-]
I have seen this quite often. I blame it on the obsession with UML along with the limitations of UML. I.e. in UML everyone draws boxes that have arrows to other boxes, but no one draws boxes that have boxes inside them. Instead, you draw a box with an arrow going out and then back to itself, hence _it has to be a loop because the arrow does a loop_.

That's why React components are _weird and wrong_, SQL CTEs are _overly complicated_, and functional programming is _impossible to understand_. It's all boxes with other boxes inside them, instead of next to them, and many people can't understand you can nest boxes many times.

rr808 10 hours ago [-]
> IME is a bigger foot gun than

Pretty much true for any functional feature. Great in the classroom, less practical in the real world.

SJC_Hacker 9 hours ago [-]
Trying to be as functional as possible I think has great benefits in the "real world". Functions are very easy to unit test, for one.

Pure functional is difficult, but breaking out the parts of the program which can be pure functional is generally a smart thing to do

If I had the drive and ability to make a programming language from scratch, it would be hybrid imperative/functional, and "pure" functions (no side effects, EXCEPT possibly debug logging) would be clearly marked as such.

alternatex 2 hours ago [-]
Pure functional isn't too difficult once you understand how to put everything that causes side effects on the side. You can write a domain layer in a purely functional manner and feed it data from non-pure sources. It's definitely a shift in thinking for people used to something like OOP, but once you make it, writing software functionally is not difficult. There's really not many ways to approach it, unlike OOP for which there are hundreds of books and approaches.
William_BB 4 minutes ago [-]
I studied FP in college. I currently work on a large low latency C++ codebase. I honestly have no idea how I'd use pure functional concepts everywhere in this context. I'm also probably doing it wrong.
LegionMammal978 14 hours ago [-]
Several people in this thread are saying that tail recursion is superior to imperative iteration in that it explicitly specifies which variables may change in each iteration (assuming a broadly immutable language).

To the contrary, I'd argue that immutability isn't the only alternative to universal mutability: many newer imperative languages (such as Rust) have various forms of controlled mutability, where one must explicitly declare which variables may be modified by imperative assignments.

IME, controlled mutability captures just about all the benefit of immutable languages, without requiring any special data structures or sufficiently-smart compiler analyses to ensure good performance. I've never really understood the desire for tail-recursive versions of iterative algorithms, except for a prior commitment to functional programming.

toast0 8 hours ago [-]
> I've never really understood the desire for tail-recursive versions of iterative algorithms, except for a prior commitment to functional programming.

Tail-recursion and iteration are broadly equivalent right? So picking one over the other is a matter of style and capability.

You can't use iteration if your language isn't capable of it, but no worries, you can use tail recursion.

Similarly, you can't use tail recursion if your language isn't capable of it, but no worries, you can use iteration, at least for iterative tail recursion. Otoh, there are uses of tail call elimination that aren't recursion (or aren't direct recursion) ... that can get awkward if your language can't manage it.

zelphirkalt 13 hours ago [-]
But Rust's semantics make it less ergonomic to pass values, and especially once you do any mutation, that code path is no longer trivially up for parallelization/concurrency. There one will have to lean into what Rust offers to make it safe again, which brings along more syntax. When one wants to pass values, one needs to implement Clone or Copy or something, and then explicitly clone or copy.

It is a tradeoff one can make, and it lends itself to high performance but it does come at a cost.

eru 10 hours ago [-]
Loops and gotos are just a special case of function calls, that needed to be invented because back in the olden days we had no clue how to design languages and write compilers.

I don't understand why someone would want to hold on nostalgically to restrictions we no longer face.

Controlled mutability is a step up, yes. Btw, even languages like Haskell give you plenty of mutability to play with, if you want to. (Eg MVars and TVars are a thing.)

You are right that replacing a loop one for one with a tail recursive version doesn't give you much benefit either way. (Though it does make my life easier, because I don't have to contort my brain around keeping track of mutation.)

However that's a bit of a straw man and misses the broader picture.

What you'd want to do is define combinators that encapsulate the specific pattern of computation you want to implement. The most commonly known combinators are things like 'map' and 'filter'. But there are plenty more useful ones you can define, like various tree traversals or whatever makes sense for use cases and data structures.

Whether those combinators are implemented with tail calls or with loops or gotos or whatever is an implementation detail that their users don't need to worry about.

I know of one major use case where you'd want to use tail recursion directly: state machines and interpreters. See https://news.ycombinator.com/item?id=43076088 for an example of the latter. See https://en.wikisource.org/wiki/Lambda:_The_Ultimate_GOTO for an example of the former.

tombert 16 hours ago [-]
I'm a pretty big functional programming nerd and I want to like tail recursion, but I honestly kind of feel like I agree with Guido on it, which is that it kind of breaks the typical stack-trace patterns.

I have kind of grown to prefer Clojure's loop/recur construct, since it gives you something more or less akin to tail recursion but it doesn't pretend to be actually recursive.

dreamcompiler 15 hours ago [-]
Clojure does it this way because the JVM stupidly doesn't support tail call optimization.

It is true that TCO messes up your stack traces. It is also true that loops mess up your stack traces, because they don't even create a stack trace. In many languages that support TCO, TCO can be turned off for debugging and enabled for production, so Guido's characterization of the issue is ridiculous.

tombert 14 hours ago [-]
I know that's the origin of why Clojure does its manual loop/recur thing, but I've grown to kind of prefer its explicitness. There's no ambiguity on whether it will TCO; it simply won't compile if it's not a tail call.

I don't agree that being able to turn off TCO in debugging is enough to call Guido's characterization "ridiculous". I often have to debug running production systems, and those are (hopefully) not deployed with debug compilations.

You're not wrong about loops not creating a stack trace; I guess what bothers me (and maybe Guido) is that with loops there's no expectation of a stack trace, while with a function call there is.

titzer 14 hours ago [-]
> It is also true that loops mess up your stack traces

No, because an indirect tail-call eliminates the calling frame. After, it looks like the caller of that function called a function it did not call. In fact, with tail-calls a whole pile of steps gets telescoped down to nothing[1]. This is not the case with a loop, it doesn't jump to itself indirectly. Loops don't jump into the middle of other loops in other functions. We can still follow the chain of control flow from the start of the main function, through (non-tail) calls, i.e. the stack, through the start of the loop, and then from some backedge in the loop body we're looking at.

Tail-calls are absolutely harder to debug in general than loops.

[1] In the limit, if the entire program was transformed into CPS with tail-calls everywhere, the original execution stack is now strewn throughout the heap as a chain of closures.

dreamcompiler 11 hours ago [-]
> Tail-calls are absolutely harder to debug in general than loops.

I like TCO because I often prefer the expressiveness of recursion over loops. When I need to debug, I turn off TCO. (I'm talking about Common Lisp.)

I agree TCO definitely makes the compiled code not look like the source, and this lossage doesn't happen with either regular recursion or with loops. But various other kinds of optimizations also have that problem.

If you're looking at the assembler output you'll see JMPs where there would have been JSRs without the optimization. So knowing that helps a bit. You just lose the automatic history preservation the stack gave you for free.

Time travel debugging might be an answer here but of course that can be much more expensive than simply keeping JSRs in place.

bee_rider 10 hours ago [-]
Although, a correctly shown stack trace for a recursive function is also pretty “messed up” in some sense. Who wants an O(n) stack trace? I guess most tools are not going to display it nicely, haha.
kqr 8 hours ago [-]
More exciting in my mind than the equivalence between tail recursion and while loops is the equivalence between left folds and for loops. A for loop in its most general shape is something like

    <whatever state the application is in here serves as the starting state>
    foreach (i in range(n)) {
        <the state of the application is updated with each iteration>
    }
    <the desired state of the application has been applied when the loop is done>
An equivalent left fold is invoked like

    <desired state> = foldl (
        <state updating procedure>, 
        <starting state>,
        range(n)
    )
The difference is that the starting state is given explicitly to the fold, and the desired state is returned as a composite value. This makes it easier to troubleshoot than the implicit state transitions, but it's essentially the same still.
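
Concretely, in something like Scala (summing 0..4 both ways):

    // <starting state> ... <state updated each iteration> ... <desired state>
    var total = 0
    for (i <- 0 until 5) total += i
    // total == 10

    // The fold takes the starting state explicitly and hands back the result.
    val total2 = (0 until 5).foldLeft(0)((acc, i) => acc + i)  // 10
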
munchler 15 hours ago [-]
Every developer should know how to write a tail-recursive function and understand why it is equivalent to a loop. That said, tail recursion is rarely needed in modern functional programming. For example, why write out a recursive function when a call to `fold` or `map` will do the same thing?
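
For instance (a quick Scala sketch):

    // Writing the recursion out by hand...
    def doubleAll(xs: List[Int]): List[Int] = xs match {
      case Nil       => Nil
      case x :: rest => (x * 2) :: doubleAll(rest)
    }

    // ...versus letting the library recursor do the plumbing:
    List(1, 2, 3).map(_ * 2)          // List(2, 4, 6)
    List(1, 2, 3).foldLeft(0)(_ + _)  // 6
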
kmicinski 15 hours ago [-]
I agree with you--that's a topic I will definitely cover in my blog, too. You make a good point: I know some folks who worked at big financial orgs, writing hundreds of thousands of lines of code, and never wrote general-recursive functions (only used simple recursors like foldl).
xdavidliu 15 hours ago [-]
it's not entirely true that it does the same thing: even if it gives the same result. For many programming languages, fold and map can only act on non-lazy data structures, so require O(N) memory for the data that needs to be folded over, while tail-recursive functions usually only use O(1) memory, even stack memory.

Notable exceptions to the above are python3 with generators, which I believe truly use O(1) memory with map and fold. Haskell has lists that are lazy by default, but if you fold or map over them, it still "forces the thunk" for each element and thus you still end up using O(N) memory.

mrkeen 6 hours ago [-]
* I don't think you meant to compare forcing each element (as opposed to forcing the input or output data structures)

* If you did mean it that way, I doubt Python can avoid forcing each element that goes through its generators. (Without, say, thunking up each element manually or using a generator of generators, etc.)

Here is Haskell using a fold and a map. It does not force the input, the output, or each element:

  main =

    let lazyInput = zip ([1..] :: [Int])
                        (repeat (error "boom"))

        lazyOutput = foldr (:) [] lazyInput

    in print (sum (map fst (take 10_000_000 lazyOutput)))

  > 50000005000000
  > 9 MiB total memory in use
  > real 0m0.062s
munchler 12 hours ago [-]
I would hope that most standard libraries are optimized to avoid this sort of waste as much as possible. In F#, for example, I know that the standard `map` and `fold` are implemented imperatively for maximum performance.
lgas 11 hours ago [-]
...although list fusion often saves you if the thunks were going to be forced eventually anyway.
zelphirkalt 13 hours ago [-]
I wouldn't say "rarely", unless you have a whole host of other higher order functions at your disposal for more special cases than map and fold. There are many cases, where you don't want to fold or map over the whole data structure and want to exit early with a result already. Writing tail recursive functions is still very common.
munchler 12 hours ago [-]
> I wouldn't say "rarely", unless you have a whole host of other higher order functions at your disposal for more special cases than map and fold

That is my point. Modern functional languages do have a host of higher-order functions for exactly this sort of thing. For example, here is F#'s `Seq` module, for working with lazy sequences: https://fsharp.github.io/fsharp-core-docs/reference/fsharp-c...

> There are many cases, where you don't want to fold or map over the whole data structure and want to exit early with a result already. Writing tail recursive functions is still very common.

I think this can usually be handled more concisely by combining higher-order functions instead. For example, if you want to fold over a partial data structure, you can use `filter` to select only the elements you want, and then fold over that subset. Or, if you want to exit early from a map, you can use `takeWhile` and only map over what's left.
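
For example, something like (Scala sketch):

    val xs = List(3, 1, 4, 1, 5, 9, 2, 6)

    // "Exit early from a map" without hand-written recursion:
    xs.takeWhile(_ < 9).map(_ * 10)           // List(30, 10, 40, 10, 50)

    // Fold over just the elements you care about:
    xs.filter(_ % 2 == 0).foldLeft(0)(_ + _)  // 12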

Real-world functional programming is usually about combining these built-in tools effectively, rather than writing new tools from scratch.

zelphirkalt 4 hours ago [-]
While you are right about being able to combine higher-order functions in the way you describe, I might not find your examples compelling, because they require 2 passes over the data.

On the other hand one could argue, that one pass of those is checking some condition (index below some number, or element followed by element that satisfies a predicate, or similar things), and the other pass is then blindly applying some other function, without having to check the condition again. Maybe that is in the end equal in performance.

mrkeen 24 minutes ago [-]
> I might not find your examples compelling, because they require 2 passes over the data.

Are these the examples?

>> For example, if you want to fold over a partial data structure, you can use `filter` to select only the elements you want, and then fold over that subset. Or, if you want to exit early from a map, you can use `takeWhile` and only map over what's left.

These are done in a single pass. And if they weren't, I'd stop using them.

Eji1700 15 hours ago [-]
> For example, why write out a recursive function when a call to `fold` or `map` will do the same thing?

Yeah this was a big help when I started F#. Basically "if you're using the rec keyword, you're probably missing something" and hell that even goes for a lot of uses of fold, from the beginners perspective.

kazinator 12 hours ago [-]
Tail recursive functions are loops when tail calls are jumps.

Tail calls being jumps is the key insight.

Not all tail calls are recursion!

Students are confused by recursion. Some may be confused by tail calls.

Don't mix two confusing things together when teaching.

(Tail calls get mixed up with recursion because recognizing and treating/optimizing recursive tail calls to create loops is an important, highlighted category. In a compiler that doesn't optimize tail calls, optimizing self tail calls is an important priority (if that is easier to do than a general treatment). Many common algorithms are expressible in just one function that calls itself, and there are cases in which some or all of those calls can be tail calls; e.g. binary tree traversal.)

The order should probably be: teach recursion first. Then when students have firmly wrapped their heads around it, introduce tail calls. Students will gain the skill of knowing what a tail call is and why it is important, and of recognizing which calls are tail calls. Secondly, the harder skill of taking recursion that uses non-tail calls and restructuring it to make tail calls instead.
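
E.g. the classic restructuring exercise, sketched here in Scala: the pending multiply moves into an accumulator argument, so the recursive call becomes the last thing the function does.

    // Non-tail: after the recursive call returns, there is still a multiply to do.
    def fact(n: Long): Long =
      if (n <= 1) 1 else n * fact(n - 1)

    // Tail: the pending work is carried in `acc`, so the call is effectively a jump.
    @scala.annotation.tailrec
    def factAcc(n: Long, acc: Long = 1): Long =
      if (n <= 1) acc else factAcc(n - 1, acc * n)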

Animats 9 hours ago [-]
Did Python ever get tail recursion? There was a big controversy years ago. Guido didn't like it. But apparently something went in.

Enthusiasm for tail recursion comes mostly from LISP and LISP-adjacent people - those who learned to program from SICP.[1] This is neither good nor bad. Even MIT doesn't use SICP any more, though.

(The classic 20th century "programming should be hard and programmers should suffer" books:

- Structure and Interpretation of Computer Programs, Abelson and Sussman.

- Fundamental Algorithms and Seminumerical Algorithms, Knuth.

- Algorithms + Data Structures = Programs, Wirth.

- A Discipline of Programming, Dijkstra.

These are mostly of historical interest now, but at one time, knowing those defined a real computer scientist. Today, it's more important that your LLM has been trained on them.)

"Love the lambda." - footer of original article.

[1] https://archive.org/details/Sicp.2nd

qsort 8 hours ago [-]
Python still doesn't have tail recursion, and uses a small stack by default.

I'll note that in modern imperative languages it's harder than it looks to figure out whether calls are really in tail position; things like exception handling, destructors, etc. interfere with this, so even as a SICP fanboy I'll admit it's fair enough that some languages don't want to bother.

masklinn 8 hours ago [-]
> and uses a small stack by default.

It does not; it sets a low recursion limit, which is a rather different thing.

masklinn 8 hours ago [-]
> Did Python ever get tail recursion? There was a big controversy years ago. Guido didn't like it. But apparently something went in.

No. The only thing that went in is moving Python frames off of the C stack, but since afaik the recursion limit did not change it has essentially no impact on user code unless you play with said recursion limit.

Mikhail_Edoshin 9 hours ago [-]
Everybody knows that "goto is harmful" but who has noticed that the very same paper says that loops are technically unnecessary?

Loops are merely a special case of recursion. The reason languages have loops is that reifying these cases to language constructs simplifies code generation. The downside is that it muddles the logic of computation.

mehulashah 14 hours ago [-]
Tail recursion is beautifully deep and simple. It (and, as a corollary, CPS) makes clear what state matters to your loop (and function), and it avoids extraneous variables in loops as well as implicit, unneeded state in functions. It also makes it easier to show the correctness of your loops. Sure, there are other functional language constructs like fold and map, if your problem is amenable to them. Tail recursion is more general and simpler.
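
For example, all of the loop's state sits right there in the parameter list (a minimal Haskell sketch, names made up):

    -- Running minimum and maximum in one pass; the only "loop variables"
    -- are the two accumulators threaded through the tail call.
    minMax :: [Int] -> (Int, Int)
    minMax []       = error "minMax: empty list"
    minMax (x : xs) = go x x xs
      where
        go lo hi []       = (lo, hi)
        go lo hi (y : ys) = go (min lo y) (max hi y) ys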
lkuty 52 minutes ago [-]
And it's a must-have in Erlang/OTP (and Elixir, of course), where you can "loop" by passing the altered state as a parameter. I love it, and I wish all languages implemented TCO. That's one of the reasons I also like Scheme.
eru 10 hours ago [-]
I have almost the opposite view: tail calls are great.

But most of the time you will want to express your program in terms of more restricted combinators, because restriction makes the reader's job easier: easier to understand, and easier to convince yourself that no weird things are happening.

Basically, you might already agree that restricting mutation is good. This is the same principle.

So when you see a 'fold', you know, even without looking at the callback, that your program will run through the whole list and not exit early. When you see a 'map' you also know that no early exit will happen, but even more, you know exactly how the return value will be constructed (whereas for fold that's arbitrary).

However you are right that 'fold' and 'filter' and 'map', just like 'for' and 'while', are not good as universal building blocks.

That's why you should define new combinators when you need them. Typically, that's when you define new data structures: eg when you define a tree, you will also want to define a 'map' for it, and also various tree traversals and perhaps searches. Less typically, you also want new combinators for new algorithmic ideas, even on old data structures.
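
For instance (a Haskell sketch), a tree comes with its own map and fold, written once and reused everywhere:

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    mapTree :: (a -> b) -> Tree a -> Tree b
    mapTree _ Leaf         = Leaf
    mapTree f (Node l x r) = Node (mapTree f l) (f x) (mapTree f r)

    foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
    foldTree _ z Leaf         = z
    foldTree f z (Node l x r) = f (foldTree f z l) x (foldTree f z r)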

Eg matrix multiplication is an interesting combinator. You provide the meaning of 'addition' and 'multiplication' as call-backs, and depending on your choice you get an algorithm for finding shortest paths or for matching regular expressions etc (in addition to the obvious matrix-multiplication on real numbers, of course). See https://news.ycombinator.com/item?id=9751987
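
Roughly like this (a Haskell sketch; matMulBy is a made-up name): you pass in what 'plus' and 'times' should mean, and the same code does ordinary matrix multiplication or one step of shortest paths:

    import Data.List (transpose)

    -- "Matrix multiplication" with pluggable addition and multiplication.
    matMulBy :: (c -> c -> c) -> (c -> c -> c) -> c -> [[c]] -> [[c]] -> [[c]]
    matMulBy plus times zero a b =
      [ [ foldr plus zero (zipWith times row col) | col <- transpose b ] | row <- a ]

    -- Ordinary multiplication of real-valued matrices.
    ordinary :: [[Double]] -> [[Double]] -> [[Double]]
    ordinary = matMulBy (+) (*) 0

    -- Min-plus ("tropical") multiplication: composes shortest-path distances.
    shortest :: [[Double]] -> [[Double]] -> [[Double]]
    shortest = matMulBy min (+) (1 / 0)   -- 1/0 is +Infinity, the identity of min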

It's relatively seldom that you want to use 'naked' recursion. The main examples I can think of are state machines and interpreters. See eg https://news.ycombinator.com/item?id=43076088 for the latter, and https://news.ycombinator.com/item?id=43076088 for the former.

mehulashah 8 hours ago [-]
I learned recursion in Pascal and had a ball implementing innumerable algorithms purely in recursive structures. The code was often more compact and simpler to explain — once you understood induction. I agree that combinators are helpful, but they’re not universal. Anyway, we’re probably violently agreeing.
eru 4 hours ago [-]
Yes, Pascal (at least the version I know) wasn't exactly friendly to passing around first-class functions. It might be possible, but it ain't pretty.

I'm not sure Pascal does closures well?

A combinator heavy style works best in a language (and ecosystem) that welcomes it, like eg Haskell.

eru 11 hours ago [-]
Loops are a special case of tail recursion, which is a special case of recursion [0], which is a special case of calling functions.

However, loops aren't the only use for tail recursion. Another classic example is state machines, as described in the 'Lambda: the ultimate goto' paper. See eg https://en.wikisource.org/wiki/Lambda:_The_Ultimate_GOTO

[0] Well, recursion on functions. You can also have eg recursive data types or recursion in mathematics in general. But let's focus on functions.

Kranar 9 hours ago [-]
I think it's the other way around. Tail recursion is a special case of loops. All tail recursive functions can be rewritten as a loop but there exist loops that can not be rewritten as a tail recursive function.
mrkeen 6 hours ago [-]
I'll bite.

What's an example of such a loop?

eru 4 hours ago [-]
> All tail recursive functions can be rewritten as a loop but there exist loops that can not be rewritten as a tail recursive function.

And the other way round: how do you (sanely) write a state machine as a loop?

See https://en.wikisource.org/wiki/Lambda:_The_Ultimate_GOTO for some examples.

I agree that (most) functions that only ever tail-call themselves can relatively easily be expressed as loops, if for some reason you really love loops. But state machines aren't like that.

With tail calls, you can represent each state as a function, and you can pass different parameters to different states. Very neat, and just works.
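
A tiny made-up Haskell example: a recognizer for strings with an even number of 'b's, one function per state, every transition a tail call:

    -- Accepts strings over 'a'/'b' that contain an even number of 'b's.
    evenBs, oddBs :: String -> Bool
    evenBs []         = True
    evenBs ('b' : cs) = oddBs cs     -- switch state: tail call to the other function
    evenBs (_   : cs) = evenBs cs

    oddBs []          = False
    oddBs ('b' : cs)  = evenBs cs
    oddBs (_   : cs)  = oddBs cs

evenBs "abba" is True and evenBs "ab" is False; with guaranteed tail calls the two functions can ping-pong forever without growing the stack.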

ninetyninenine 1 hours ago [-]
All recursion can be replicated with a loop and a stack.
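
For example, summing a binary tree both ways (a Haskell sketch): the second version is one tail-recursive "loop" whose stack is just an ordinary list of pending subtrees:

    data Tree = Leaf | Node Tree Int Tree

    -- Plain recursion: the call stack remembers the pending subtrees.
    sumTree :: Tree -> Int
    sumTree Leaf         = 0
    sumTree (Node l x r) = sumTree l + x + sumTree r

    -- The same traversal as a loop over an explicit stack of work items.
    sumTreeIter :: Tree -> Int
    sumTreeIter t = go [t] 0
      where
        go []                  acc = acc
        go (Leaf : rest)       acc = go rest acc
        go (Node l x r : rest) acc = go (l : r : rest) (acc + x)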
ChrisMarshallNY 5 hours ago [-]
In Swift, we use guard clauses a lot. I think guard can compile well (not just in recursion). This post helps me understand why.
odyssey7 14 hours ago [-]
High-profile places where this is sadly not implemented include:

- V8 JavaScript Engine

- Python

aryonoco 15 hours ago [-]
For what it’s worth, in F#, tail recursive functions are turned into equivalent loops by the .NET CLR.

I actually like the compromise. I get to write safe functional code while getting all the benefits of a highly optimised iterative operation.

username3 15 hours ago [-]
> I wrote up a detailed blog post about tail call optimization in Elixir/Erlang and its performance. The TLDR sort of is that non-tail-call-optimized recursive functions (body-recursive) can be faster and more memory efficient than TCO functions. This is something I never thought before; that TCO is always faster seems to be a common misconception. My example is also not contrived - it's an implementation of map.

https://pragtob.wordpress.com/2016/06/16/tail-call-optimizat...

tylerhou 14 hours ago [-]
I disagree with the title; loops are tail-recursive functions, but tail-recursive functions are not loops (in the sense that squares are rectangles, but rectangles are not squares).

It is true that every tail recursive function can be converted into a semantically equivalent loop via a transformation like CPS (which the author mentions). However, for mutually tail-recursive functions, this conversion loses control flow information. This is because after the CPS transformation, calls to the other function become calls to a continuation; this call usually must be implemented as an indirect jump. On the other hand, mutually tail-recursive functions can call each other with direct/statically-known jumps.

This loss of information might appear trivial, but in practice it has some important consequences. Classic examples are interpreter loops. It is well-known that computed gotos can result in modest to large speedups for interpreters [1]. The reason why is that computed gotos create an indirect jump per opcode, so a branch predictor can take advantage of correlations between opcodes. For example, looking at Python disassembly, the header of a standard range for loop compiles down to three opcodes: GET_ITER, FOR_ITER, STORE_FAST in sequence [2]. A branch predictor can recognize that the target of the "FOR_ITER" indirect jump will likely be the "STORE_FAST" instruction pointer; it cannot predict this in the naive implementation where jumps for all instructions are "merged" into a single indirect jump / switch at the top of the loop body. In this case, computed goto is effectively equivalent to a CPS transformation whose closures require no storage on the heap.

Suppose, however, we know even more information about the instruction sequence; for example, we know ahead of time that every FOR_ITER opcode will be followed by a STORE_FAST opcode. We could completely replace the indirect jump with a direct jump to the instruction pointer for the STORE_FAST opcode. Because modern branch predictors are very good, this will have about the same performance in practice as the computed goto loop.

However, consider the limiting case where we know the entire instruction sequence beforehand. If we write our interpreter as many mutually tail-recursive functions, with one function for every instruction, an optimizing compiler can replace every indirect call with a direct (tail-recursive) call to the function that implements the next instruction's opcode. With a good enough optimizer / partial evaluator, you can turn an interpreter into a compiler! This is known as the first Futamura projection [3].
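
A toy Haskell sketch (made up, and much weaker than the prototype below) of just the "interpreter as a family of mutually tail-recursive functions" shape; the real trick is then specializing these functions to a known program:

    data Instr = Push Int | Add | Halt

    -- One handler per opcode; `dispatch` and the handlers only ever
    -- transfer control to each other via tail calls.
    dispatch :: [Instr] -> [Int] -> [Int]
    dispatch []              stack = stack
    dispatch (Push n : rest) stack = doPush n rest stack
    dispatch (Add    : rest) stack = doAdd rest stack
    dispatch (Halt   : _)    stack = stack

    doPush :: Int -> [Instr] -> [Int] -> [Int]
    doPush n rest stack = dispatch rest (n : stack)

    doAdd :: [Instr] -> [Int] -> [Int]
    doAdd rest (a : b : stack) = dispatch rest (a + b : stack)
    doAdd _    stack           = stack   -- stack underflow: just stop

    -- dispatch [Push 2, Push 3, Add, Halt] []  ==  [5]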

To see an example of this in action, I wrote a prototype of a Brainfuck compiler via the Futamura projection; it uses LLVM as a partial evaluator [4]. The main interesting function is `interpret`, which is templated on the program counter / instruction. That is, `interpret` is really a family of mutually tail-recursive functions which statically call each other as described above. For short Brainfuck programs, the LLVM optimizer is able to statically compute the output of the Brainfuck program. (The one in the Godbolt link compiles to a loop, likely because LLVM does not want to unroll the mutual recursion too much.) You can play around with different Brainfuck programs by modifying the `program` string on line 5.

[1] https://eli.thegreenplace.net/2012/07/12/computed-goto-for-e...

[2] https://godbolt.org/z/rdhMvPo36

[3] https://en.wikipedia.org/wiki/Partial_evaluation#Futamura_pr...

[4] https://godbolt.org/z/WY4j931jf

ethan_smith 7 hours ago [-]
The analogy is inverted - tail-recursive functions are a superset of loops (rectangles include squares), not vice versa. Loops can always be expressed as tail-recursive functions, but tail-recursive functions (particularly mutually recursive ones) can express control flow patterns that simple loops cannot without additional machinery.
kmicinski 10 hours ago [-]
I appreciate your thoughtful criticism of the post; to my eyes, everything you are saying is correct.
wagwang 14 hours ago [-]
Normal recursion is just a loop and a stack. It turns out that if you can optimize the recursion so it doesn't need a stack, it just becomes a loop.
Mikhail_Edoshin 8 hours ago [-]
It is still the same stack, but it reuses the existing stack frame.