Interview With John Earnest
Mar 2026 - Alex Alejandre

John Earnest (Internet Janitor) crafts creative, empowering projects “impossibly complex for the platforms they’re running on”, like dp and Octoma. His project Decker reminds many of HyperCard, as discussed on the ArrayCast. iKe is a sound and graphics platform for oK, with example programs like Asteroids.

In this interview, we discussed array languages (K and Lil), language design and creativity.


Where should we start? Oh! A chicken just jumped in your lap, cute! What’s her name?

Galina, like the mineral. I had two, but her sister passed away; these things happen.

Most of my professional experience is with K, often regarded as a successful fusion of APL sensibilities and some Lisp ideas. While pretty much every other APL has the notion of strictly rectangular arrays as the data carrier, in K, nested tree-like lists are no problem. K’s syntax is essentially the M-expressions Lisp was originally intended to have.

How do you architect large systems in K? I’ve only seen big APL systems where short, elegant primitives drown among many long names, even a single declaration per file like the worst Java excesses. Is there a way to avoid this?

In Lisp, everything intentionally looks like it’s made out of the same material. Racket or Clojure have a bit more syntax than just s-expr but macros intentionally look the same as a function, both good and bad depending on perspective. The crystalline quality of an APL-family language is diminished when you build a huge codebase and it starts to resemble more conventional languages.

In the K and Q codebases I’ve worked on, you have subsystems broken into their own files, extensive commenting… Local names remain short, but module-level names tend to be longer. The overall architecture looks the same as if you’d implemented it in any other dynamic language, but the business logic shrinks down into tiny knots; k expresses what would be a page of code in another language in 2-3 lines, so you’re mostly left looking at names.

APLs and Lisps are both very high-level, expressive languages, but in a Lispy language you learn to build up a set of abstractions and grow the language in the direction of what you’re trying to do. In the APL languages, the ideal is programming without abstractions. You write the program directly in terms of the language. Design discussions often focus on what ought to be a primitive vs. an idiom (a simple composition). Indeed, some strident APLers advocate treating libraries not as files of code you import, but as files of snippets you can modify to do exactly what you need.

In the novel The Mote in God’s Eye, the engineer caste would never just build a chair; rather they’d build the precise chair one person needs at this exact time. It’s a different way of approaching code reuse: instead of taking an encapsulated component for granted, assuming everything inside works behind a reasonably narrow interface, you decompose it, adapt it and blend it into the intended system. Certain problems have enough irreducible complexity that it’s difficult to take this approach; a web browser has to deal with enormous, complicated, evolving standards, and there are material reasons to want the codebase to correspond to the structure of the specification. But if your task isn’t making a standards-compliant browser but building something with the function of a web browser, you have opportunities for tearing down preconceived ideas and simplifying the problem. This philosophy is quite compatible with Forth, where you also program without abstractions.

As an aside, high and low level languages are a false dichotomy. Paul Graham’s Blub article about Lisp posits a linear scale of goodness in language, but it’s at least a lattice. Some languages are objectively better or worse designed, but once you add a context and something you want to do with the language, there are multiple good languages, multiple dimensions of expressiveness and rigidity, rigor and plasticity, which can benefit different domains.

How does one determine their context? It’s quickly “non-technical.”

That’s what programming’s about. Whether you’re an artist choosing interesting constraints for yourself or you’re an engineer receiving a set of constraints you need to satisfy: Does it need to be fast, deploy on a tiny or exotic device, be maintained for a long time by many people or should it be portable to things which don’t yet exist? There are many dimensions to think about.

A controversial opinion: in many contexts static type systems are overrated. They obviously excel in certain contexts: if you’re making something irreducibly complex which won’t ever fit in a single person’s head, then expressing the constraints in a machine-readable set of schemas solves coordination or social problems. But when programming in the small, when things are changing rapidly, conciseness and immediacy shine, and types constrain your thinking unnecessarily.

Lil’s error model for human-scale systems applies perfectly here. Redefining your problem and tools lets you ignore entire categories of complexity.

To clarify in context, Lil has syntactic errors but no runtime errors, so every well-formed Lil program has defined behavior, even if it’s silly or doesn’t necessarily match what you hoped for. The idea is that by adding constraints which say “this operation is invalid or erroneous”, you make the space of valid programs sparser and further apart. If your goal is as much concision as possible, you want to pack in as much meaning as you can.

I wouldn’t recommend anyone build a million line codebase in Lil, but it’s pretty good at solving problems in 50 lines. In the procession of ideas from k, q and the APL family, their terseness is a good user interface at a REPL. With programming constructs letting you write without explicit loops or conditionals, you have a piece of code with a cyclomatic complexity of one, and you can test it interactively in the REPL as you build it. You have different kinds of assurances than in another language with different techniques. Of course, it doesn’t always work that way.

Anytime I add a new library to the Decker ecosystem or a new operator to Lil, there’s a fun design problem of how to make it useful in as many situations as possible. What does it mean to apply this to a dictionary, list or table? How can you unify them or simplify the application logic?

How do you actually try to answer those questions, iterate and get a tighter fit?

My programming philosophy is using the language a lot. I build applications and look for patterns: things with a lot of repetition, things that are difficult or clumsy to express. Any time many people resort to a library, the question is whether the problem is so open-ended, application-specific or unclear about the right way to do it that it belongs in a library, or whether it should be in the language itself.

I caution language designers not to do too much code golf in their language. People into PLT do a lot of fun puzzles, but puzzles have recurring themes less represented in practical applications. You can go crazy adding features for code golf, but every language has a complexity budget. You hope you spend it on beneficial things. Lil spends a lot of that budget on its integrated query language with first-class tables, which comes in handy in many domains.

How do you find projects or ideas for applications?

Having Decker as a rapid prototyping system, I just get ideas of what I can build, and sometimes realize I need a missing dependency too! Last year, I wrote a fun little program to generate Valentine’s Day cards with clip-art as PDF files to print out and fold. So I had to write a PDF generation library, because I didn’t want to develop a cross-platform printing API. A controlled subset, i.e. generating PDFs, isn’t so bad, and every platform has working PDF viewers with print functions!

There are so many things I or my users might want to build, which often fork into further problems. A lot of it’s just building tools and games. I made a graph plotting library for my personal budgeting application. If I need to add a new feature, editor mode’s one click away.

It’s difficult to help new Decker users understand that documents and applications don’t need a hard separation. The data lives in the deck. The deck is an application. You can implement your own undo, redo, save and load, or just leverage them as properties of the environment itself. If you want to write a CRUD application, you can plop a grid widget onto a card; it already supports adding and removing rows, sorting by column and so on. You can switch between programming, editing, drawing and using the application.

What really makes me happy is when users take and modify some deck or card to suit their preferences: rearranging things, changing the font, drawing on the backs of cards. Wouldn’t it be great if more of our applications today tolerated that kind of customization?

It’s so empowering!

We have a small group of people who really like Decker and build stuff. It takes time to find and collect the people who will spend a lot of time with it. Over time, I’m also improving the tool. I’ve made a lot of performance improvements over the 3 years it’s been public. In the last year, we added partial native Unicode support, so it’s suitable for people speaking other languages.

How do you cultivate a community? Even individual applications built on Decker have communities!

I’ve always felt the responsibility to answer all questions I can. As the designer of Lil, I’m the world’s foremost Lil programmer and I wrote all the documentation. If the documentation is unclear or incomplete, I can fix it live! Having the patience to answer beginner questions is the only way to teach others to use the system. By leading by example, answering questions and giving advice, others have begun to do the same and answer questions too! We have a growing group of enthusiasts with their own ways of approaching things. I think that’s really great.

Making the Decker ecosystem as appealing and useful as possible to people who don’t conventionally consider themselves as programmers is very important. We have a lot of artists, writers and other creative people looking for ways to make their ideas interactive. I want to make Decker as useful as possible without any programming. If they’re willing to learn a little bit, I want them to then get a lot out of that.

I’ve always thought of programming as a wonderful, expressive medium. The act of writing a program about something is a way of learning about that something. If I want to make a program which helps generate audio, I’ll learn a lot about digital signal processing, sound, maybe music theory. I want as many people as possible to at least have the opportunity to dip their toes.

How can we help people see programming as a tool of exploration and learning?

Typically, I approach this in the same way as language design: Start with applications first! Learning to build anything requires you to get some rough, incomplete, flawed understanding of your domain to get it to work at all. If you solve several sets of problems in a domain, you’ll see the shape of things you have to do in multiple places. Writing a program requires making the details more concrete than merely writing about it on paper, leading to crisper understanding. This is why programming really interests me.

We can solve things in a human way, a statistical way or with engineering-esque rules of thumb, but by turning them into algorithms we can truly understand them! (It’s unfortunate how the popular use of the word algorithm inverted its original meaning. To a computer scientist, it means fully known discrete steps to carry something out, while the popular understanding sees a black box whose inner workings are inconceivable.) Learning to drag heuristics, rules of thumb and statistical approximations into discrete, understandable programs is lovely, intellectually satisfying! But not everything has to be that way either. You can make toys, games, puzzles without understanding everything; it’s just about the effect you’re trying to achieve, like a magic trick or a film.

How does Lil help people write programs?

I think of Lil as a less beautiful language than K. It’s a set of pragmatic compromises. K is a very pragmatic language among the APLs, refined in the crucible of high frequency trading and quant work; it had to be plastic and fast, with the toolbox for solving those problems. When you look at K, the irregular aspects of the design make sense once you realize what specific thing Arthur needed to do.

K has a regular adverb set, i.e. higher-order abstract iteration features, with great symmetry. You have over / (reduce), scan \ (reduce keeping intermediate steps), each-right /: and each-left \:, then each ' and each-prior ':, which is specifically for zipping a sequence against a 1-shifted version of itself. It just turns out that’s really useful in its domain (running differences, changing ratios) and common enough to warrant an adverb form!
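As an illustrative sketch (in Python rather than K, and with a hypothetical helper name), each-prior amounts to applying a binary function to every element and its predecessor:

```python
# A rough Python analogue of K's each-prior ('): apply a binary
# function f to each element of xs paired with the previous element.
# The first element is paired with itself here for simplicity;
# K lets you supply an explicit seed instead.
def each_prior(f, xs):
    prev = xs[0]
    out = []
    for x in xs:
        out.append(f(x, prev))
        prev = x
    return out

prices = [100, 104, 103, 110]
# Running differences: each price minus the one before it.
deltas = each_prior(lambda cur, prev: cur - prev, prices)
print(deltas)  # [0, 4, -1, 7]
```

In K the whole pipeline above is a couple of characters; the point of the adverb is that this shifted-zip pattern recurs so often it deserves its own notation.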

Lil is a stripped down, less beautiful version of Q from a certain perspective. It leans more heavily on tokenized names than symbols, and relies less on overloaded meanings. In k, most symbols on your keyboard have at least 2 meanings (monadic and dyadic, taking one or two arguments; not a monoid in the category of endofunctors). In Lil, they’re peeled apart; an operator is either an infix dyad or a prefix monad. K has flexible projection (partial application) and currying support, which Lil has less of. Lil has fewer adverbs.

Basically, I tried to make Lil less scary for beginners. When you get better at an APL, you begin to appreciate symbolic notation and how it lets you see words and phrases in a few symbols which have a unitary meaning together. With keywords, you lose the ability to grasp an algorithm from its shape.

But Lil looks like Lua to be approachable for a beginner who’s used Python, JavaScript or anything dynamic. Secretly, it’s a pure functional language: all the built-in data types are immutable, you can do equational reasoning, it has tail call elimination. Secretly inside of that, it’s even an array language. It’s designed so people can approach it and use a tiny, narrow subset in an unsurprising way without hammering them with how exotic and exciting it is. But the deeper you go, the more it lets you use so many styles of programming in one language! My hope is that people who ease themselves into Lil will find the language has a lot of headroom to grow and take on more complicated things. And there are so many things I’d find intolerable not to have access to, like implicit vector arithmetic; so many Decker APIs are built around pairs, like positions on screen, which you can just add together! You very rarely have to do something on the x-coordinate of an object and then on the y-coordinate, nor iterate over the points in a polygon to draw its lines one by one, because the canvas drawing API accepts an entire polyline. I can generate an entire polyline without a loop because I have range operators and implicit arithmetic.
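To make the pairwise-arithmetic idea concrete, here is a hedged Python sketch; in Lil this addition is implicit, while plain Python needs the element-wise operation spelled out once (the function name and the screen-coordinate values are invented for illustration):

```python
# Simulating Lil-style implicit pairwise arithmetic on points.
# In Lil, positions are pairs you can add directly; here we define
# the element-wise addition explicitly.
def add(p, q):
    return [a + b for a, b in zip(p, q)]

widget_pos = [40, 25]
offset = [10, 5]
print(add(widget_pos, offset))  # [50, 30]

# Generating a whole polyline at once from a range plus arithmetic,
# rather than drawing line segments one by one in a loop.
polyline = [[x, 50 + (x % 20)] for x in range(0, 100, 10)]
print(polyline[:3])  # [[0, 50], [10, 60], [20, 50]]
```

The comprehension still iterates under the hood, of course; the point is that the vertices are produced as one value that a drawing API can consume whole.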

There’s some sleight of hand making the simplest things people want to do as easy as they can be; the power in the language does not come at the expense of scaring people off.

Arthur Whitney didn’t do the first part, he sought pure power and speed! How did you go about learning K? I used What about k?, kcc and your oK manual!

I had bounced off J in my earlier career. The J manual was a really slow burn, telling you how to do conditionals or what a number is, which felt like any other programming language. But I read about k and the impending kOS, about an extraordinarily effective programmer writing human-scale programs, and I found it romantic. I wanted to play with k but didn’t have an interpreter; I was, however, able to find the manual. So I literally read the K2 reference manual front to back a few times and implemented an interpreter behaving as described. Of course, looking back, that was a mess and I made many mistakes, but it was a good way to learn the language.

Of course, I did many of the things anyone else does when learning a language: do some Project Euler, do some puzzles and gradually tackle more complicated things. I ended up building this system called iKe, a graphical programming toolkit you can wire to input events, draw polygons and bitmaps on the screen, play sound. So I implemented some arcade games, trying to do more open-ended things instead of batch mode programs. Eventually I got some opportunities in consulting and technical writing which led me to professional k work. It’s quite difficult to work your way into the places these languages are used, but they understand they won’t get candidates deeply familiar with these languages. They’re looking for people willing to give it a try in spite of it not looking like Java. There’s also some luck involved, which I try to pay forward.

You really do pay it forward empowering people with Decker, oK etc. What inspires you?

I think creative expression is really important. Computers, networks and the web give us enormous opportunities to exercise that creative expression. I want programming to be seen more as a liberal art than something career-oriented. I’m always frustrated by politicians discussing computer education in terms of earning potential; we don’t discuss algebra, social studies or arts programs under such perspectives. I don’t think everyone will enjoy programming, but I think many would benefit from learning a little bit about programming and maybe discovering they enjoy it. Distilling it down, we should consider computing as an art.

The art of programming’s always been intertwined with the needs and preferences of industry. But there’s so much in programming that’s like poetry and sculpture; you’re choosing your words carefully, you’re defining space. But when making a webapp for your e-commerce company, it may be hard to feel it. Nevertheless, there’s this seed of elegance, of intellectual joy to programming, the philosophy of writing programs as a vehicle to understand things about the world.

There’s always the curse where if programming is an art, it’s not very accessible. It’s easier to appreciate a beautiful painting, perhaps more cerebral to appreciate certain kinds of poems, but you really need a lot of technical grounding to appreciate a beautiful program, which doesn’t diminish it as an art. If anything, it levels the challenge at us: How can we make this more accessible? How can we communicate this to a broader range of people? I don’t know the answers but it’s a pursuit worth pursuing.

How did you learn computing? What happened until you could implement a language described in a manual?

My family got access to our first computer, which I had a limited ability to play with. I didn’t have the internet nor any mentors, but I had a library card and could read unclear, hideously out-of-date books about the same introductory topics until something would click. I spent a long time writing atrocious BASIC more ambitious than my skill level. It just takes a long time. But so many things would have been easier with a mentor to tell me what to read or try, which motivates me to help others.

It’s so rewarding when I hear someone who’s used one of my tools or projects say it helped them or inspired them to go on their own path. I try to give people a better time than I had.

How would you mentor a little you today, who for lack of a better goal would like to engage with your current work?

I would get myself programming in Processing and show myself The Nature of Code, starting by motivating these ideas of computational geometry and computer graphics with little agents interacting in simulated worlds. I wanted to do so many things in BASIC but didn’t understand physics or geometry, nor have an expressive enough language to play with agents in a simulation. There’s a lot of hard-won stuff from working my way through different programming paradigms. I think we all encounter a gateway drug that changes the way we think about things. Astrachan & Wallingford wrote an article called Loop Patterns, breaking down and categorizing conventional for and while loops into abstract strategies, like the loop-and-a-half where you prime the pump on something then continue the loop, or filtering things. At the time, it gave me the profound realization that these control structures are like lower-level components for higher-level ideas. When I discovered Python’s itertools while working as a Java programmer (pre-Java-streams), I learned a lot about iterators. This stuff slowly got me into a more functional, abstract way of describing logic until I was ready for J or k.

I don’t want to give the impression that people using imperative languages are mentally stunted, but learning to think in larger pieces requires thinking in more abstract patterns of programming. My favorite thing about K is how I can go on a walk and think about the solution to a complicated problem in terms of K or Lil primitives, come home and pour out that line or two and see it often work how I was thinking about it. I like to go on long walks to think about programming in general. But I’d never been able to be as precise, to carry as many ideas in my mind as when I’ve learned these more expressive languages with more abstract tools. When I have to work in something like C, I miss having these tools. It hurts to write a lot more code when I know there’s a more concise pattern, a simpler decomposition of an idea.

The beautiful thing about APL-family languages is you have these thoughtfully, wonderfully designed building blocks that fit together in so many ways, some useful patterns, sort of tiling space. In many programming languages, you’ll have a sort function as part of a built-in standard library, with the choice of a comparator or between ascending and descending. In APL-family languages it’s normally split into two parts: the grades < and > (APL: ⍋ and ⍒) give a permutation vector sorting a collection ascending or descending, and indexing @ (APL: ⌷) is the other half. You index a list by the grade-up of itself in order to sort it (a: 1 5 3 7; a@<a returns 1 3 5 7). By splitting it into 2 pieces, it’s now just as easy to sort one list relative to another list, which is quite difficult or clumsy in many other languages (a@<4 3 2 1 returns 7 3 5 1, while a@<2 2 1 1 returns 3 7 1 5).
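The same decomposition can be sketched in Python (an illustrative analogue, not K; the function name is invented):

```python
# Grade-up: return the permutation that would sort xs ascending,
# analogous to K's monadic < / APL's grade-up. Python's sort is
# stable, matching the stability of APL grades.
def grade_up(xs):
    return sorted(range(len(xs)), key=lambda i: xs[i])

a = [1, 5, 3, 7]
g = grade_up(a)
print(g)                    # [0, 2, 1, 3] -- the permutation itself
print([a[i] for i in g])    # [1, 3, 5, 7] -- a@<a: sorting a

# Sorting one list by the grade of another (keys):
keys = [2, 2, 1, 1]
print([a[i] for i in grade_up(keys)])  # [3, 7, 1, 5] -- a@<keys
```

Because the permutation is a first-class value, "sort by another column" costs nothing extra, whereas a monolithic sort function needs a separate key-function mechanism to express it.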

People like Iverson and Whitney put a lot of thought into these beautiful decompositions, in these useful general tools. It’s unfortunate how people think of APLs as regular expressions for lists like it’s this DSL with goofy notation for complicated munging. But when you apply it to broader domains, it’s surprising how many kinds of problems you can solve with those same building blocks and data structures.

I’ve been thinking about unifying paradigms lately. We have operations across whole data structures without looping or e.g. kSQL (related to logic programming), different metaphors for the same data structures and operations. You commented somewhere about tables and the primitives required for them to be a useful data structure for a given problem type. How do you choose what metaphors to use?

The most successful relational programming language in existence is SQL. The dream of the relational family was to separate the logic of exactly what’s happening from the data and the description of what we want. Every programmer feels like they ought to know more Prolog than they do; if you learn it, it’s like tricking a search algorithm into doing other things too. SQL queries have a smoothness to the solution space (ignoring nits in big queries). In K, if you need to do a specific thing like parse some fiddly record format, and you solve the exact problem in front of you, there’s normally some elegant way to do it. But if you change the problem even slightly, the solution will wildly change into something else. It’s nicer if small changes to the constraints or requirements of a problem correspond to small changes to the program that solves it. I would argue small changes to a query require small changes to the SQL code (ignoring SQL-engine dependent issues). It’s like a unified algorithmic framework for sorting, filtering, mapping, set operations etc. A new control structure unifying these operations is exactly what Lil’s query language is intended to be: searching, mapping, filtering, grouping etc. unified into queries.

I in no way claim credit for this; Lil’s query language is very similar to kSQL/QSQL (the kdb+ query language), but it’s generalized further. kSQL is designed to ease simple things for people comfortable with SQL. Then you take the training wheels off and it’s a bunch of triadic or pentadic, complicated functions doing row manipulations of a selection in a more general and abstract way. Lil, like QSQL, is a columnar query language; all subexpressions in a query operate on entire columns at once with conforming primitives.

You can’t avoid Lil’s Q/K/APL stuff in this context. People who’ve wrapped their brains around this write a query which works, then are amazed to realize they can write a query anywhere else too! Lil’s query language rules are less rigid than QSQL’s; you can repeat clauses or write them in any order. Queries take a table or list of grouped tables and a column expression, evaluate the column expression in relation to the table(s) and return a table or list of tables. A where clause has a column expression you apply in the context of some tables, and you receive filtered tables as output. Everything’s a pipeline.
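The pipeline idea can be sketched in Python with a toy columnar table; the names and structure here are hypothetical illustrations of the concept, not Lil's actual API:

```python
# A toy columnar "table": a dict of equal-length column lists.
# Each stage takes a whole table and returns a whole table,
# so stages chain into a pipeline.
def where(table, mask):
    """Keep the rows where mask is True, across every column at once."""
    return {name: [v for v, keep in zip(col, mask) if keep]
            for name, col in table.items()}

def select(table, **exprs):
    """Compute new columns; each expression sees the whole table."""
    return {name: expr(table) for name, expr in exprs.items()}

sales = {"item": ["pen", "ink", "pad"],
         "price": [2, 9, 4],
         "qty":   [3, 1, 2]}

# where price < 5, then select item and a computed total column:
cheap = where(sales, [p < 5 for p in sales["price"]])
totals = select(cheap,
                item=lambda t: t["item"],
                total=lambda t: [p * q for p, q in zip(t["price"], t["qty"])])
print(totals)  # {'item': ['pen', 'pad'], 'total': [6, 8]}
```

Because every stage has the same table-in, table-out shape, clauses can be reordered or repeated freely, which is the property described above.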

Open-source Ks mostly ignore kSQL; what inspired you to go down this path?

When first getting into k, I didn’t recognize the expressive benefits of tables. Coming from other languages, you think of a table as a dictionary of columns (or a list of row dictionaries) with some extra constraints, but it’s both; you can look at it vertically or horizontally. At work we did a lot of data manipulation. At 1010data, all the infrastructure was in k3. Beyond that, it exposed an ad-hoc query language interface for taking a gigantic data set and doing bulk operations on it before looking at it in granular detail. You could have a billion-row table of every receipt from a grocery store and ask the system questions: see the top 10 most expensive line items, what usually gets bought together at the same time… This query language had a compositional approach, starting with a table then banging on it with various operations, filtering it down, merging in another table, computing another column. The step-by-step process, seeing the intermediate steps, was a rather powerful way to think about transforming data. If you take an SQL expression and know what you’re doing, you can remove clauses and get something similar, but they go together in confusing orders and have surprising consequences. It’s difficult to get step-by-step reasoning about an SQL query even if you’re a DB expert.

Another nice thing is how a compositional query language where every prefix of a query is a valid query lets you aggressively cache results. If you grab a table, select a subset, compute a column, but then want to change something midway through the query - all the logical tables which flowed through the prefix of that table are cached and available. That’s how you get an interactive system querying against gigantic data sets across a cluster without giving up and working on a sample.
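A minimal sketch of that prefix-caching idea, in Python with invented names (real systems would key the cache on hashes of the inputs as well as the stage names):

```python
# Each pipeline stage is keyed by the chain of stage names that
# produced it. Editing a later stage reuses every cached prefix.
cache = {}

def run(stages, table):
    key = ()
    for name, fn in stages:
        key = key + (name,)          # the prefix identifies the result
        if key not in cache:
            cache[key] = fn(table)   # only compute uncached suffixes
        table = cache[key]
    return table

data = list(range(10))
stages = [("evens",   lambda t: [x for x in t if x % 2 == 0]),
          ("squared", lambda t: [x * x for x in t])]
print(run(stages, data))  # [0, 4, 16, 36, 64]

# Changing only the last stage reuses the cached "evens" prefix:
stages2 = [("evens", stages[0][1]),
           ("doubled", lambda t: [2 * x for x in t])]
print(run(stages2, data))  # [0, 4, 8, 12, 16]
```

This is why every prefix of a query being a valid query matters: the intermediate tables are meaningful values worth keeping around.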

Others are doing this too. Nushell has been around for a while. Rye and Lil are prominent new examples of languages with first-class tables. There are elements of this in LINQ for C#, which lets you query iterables with SQL verbs. LINQ’s designed around simpler data structures, but I believe you could make a table data structure and overload it so LINQ would operate on it all at once.

In APL, we learn the principle of operating on entire data structures at once rather than piece by piece. Even in functional programming, you set up a pipeline, but you’re conceptually thinking of and implementing operations on the individual pieces. (That has its own advantages, if you want demand-driven execution without computing a whole infinite set first.)

I’ve written so many practical programs in JS or what have you, where I’m operating on what’s conceptually a table but expressing it as a sequence of dictionaries with a uniform structure. Think of how many times you’ve done this, how many APIs work like this! If you want to manipulate them, it’s normally a column-wise operation based on a subset of fields. Maybe you want to reorder them but you obviously don’t want to only reorder the prices in a list of sales orders, you want to reorder all the columns in the same way at the same time!
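The contrast between the two views can be sketched in Python (the data and names are invented for illustration):

```python
# Rows-as-dicts: the shape most JS/Python APIs hand you.
orders = [{"item": "ink", "price": 9},
          {"item": "pen", "price": 2},
          {"item": "pad", "price": 4}]
by_price = sorted(orders, key=lambda r: r["price"])
print([r["item"] for r in by_price])  # ['pen', 'pad', 'ink']

# The columnar view of the same data: one permutation reorders
# every column in step, so records never get scrambled relative
# to each other.
cols = {"item": ["ink", "pen", "pad"], "price": [9, 2, 4]}
perm = sorted(range(len(cols["price"])), key=lambda i: cols["price"][i])
reordered = {name: [col[i] for i in perm] for name, col in cols.items()}
print(reordered)  # {'item': ['pen', 'pad', 'ink'], 'price': [2, 4, 9]}
```

In the columnar form, the "reorder all the columns in the same way at the same time" requirement is structural: the permutation is applied uniformly, and adding a new column requires no change to the sorting logic.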

When you equip yourself with the idea of a rectangular table as a tool for modeling the world, you’ll see it in a lot of places. When you model the world this way, you’ll notice relational algebra’s high-level operations, like left joins, are a useful way of expressing complicated algorithms on that data. Without first-class tables, you can only grasp at it. Most languages with a dataframe probably want something more like a first-class table. (Different languages and frameworks have varying degrees of generality about this, so I don’t want to sling too many stones.) Many systems have a dataframe but require every column to have the same datatype, which is better than nothing but less general and useful. It’s like a reduce operation where the left and right arguments must have the same type: that lets you do min, max, product etc., but constrained to something so rigid, you can’t express many other things. Having records of data which travel together and get manipulated in a uniform way is a useful paradigm. Tables as a first-class data structure, or at least a convention understood by a large portion of your standard library, will get more adoption over time, just as we have seen ideas like map and filter become common, even expected tools.

I don’t think it’s hard to do what Lil does in more experimental languages, but many of Lil’s design decisions specifically allow it to work out as nicely as it does. A column expression in Lil is just a normal expression where the columns of the table happen to be bound as locals. There’s no special magic; it’s just ordinary implicit iteration. In a language without implicit iteration or element-wise equality, you’d have to invent something. As I understand it, R has blocks behaving in a special, lexical way, and this mechanism provides a lightweight lambda function to symbolically spread an operation across a whole table. I think Ruby has similar facilities.

How do you approach designing software? You make many applications with Lil and Decker! How intentionally do you delineate what the universe is, what goals you want? Are you able to consider them done, or do you periodically come back to add features?

Using the tools I build is the best way I know to improve them. Using Decker to build interactive documentation and example projects keeps me constantly aware of what things are clunky to use, what recurring problems don’t have solutions at hand, and it does often help me discover bugs or design flaws before users are bitten by them. Simple things ought to be easy, so when they end up being difficult in practice that’s a signal. The tools grow to suit the needs of the projects I dream up to build, and the projects my users try to build.

Sometimes those projects take on something of a life of their own. WigglyPaint is a drawing tool I wrote in Decker that managed to become wildly popular with artists in China, and then later the broader web. I see it as something of a “social proof” for Decker as an authoring tool, since the vast majority of the people who have enjoyed WigglyPaint don’t have any idea it’s a Decker tech-demo; they like it on its own merits.

If they were to dig a little deeper, though, they might be surprised to discover that the drawing tool they’re using resides within an entirely different drawing program. Everything is plastic: you can readily alter the color palettes, rearrange tools on the workspace, introduce a new brush shape, or even add complex, entirely new features without restarting anything or switching to a different application. Every exported “deck” comes with all the editing tools, so every user is empowered to take things apart and put them back together as they please. It’s lovely to have a self-contained rapid prototyping environment where everything you might need is close at hand and the whole system is “human-scale”; a person can fit it all in their head at once.

In the right hands, kdb+ is a similar story: it’s a single vertically-integrated technology that can be used to construct all sorts of distributed systems (databases, load-balancers, caches, ticker plants) with a page or so of code. That’s not just because K is a dense, expressive language, but because everything is made out of modular pieces that all fit together well. You can prototype entire systems without the overhead of selecting, orchestrating, and then interoperating with a bunch of other products, each with its own APIs and protocols.

Wow! How much are people on the open-source Ks missing out without these features? J has jd and the foreigns, but…

Quite a bit. There are several high-quality FOSS K interpreters available now (ngn/k, Kona, and ktye/i, besides my own oK) which are great for learning the language itself, but most of them don’t have the batteries included that you’d want for building a practical system, like IPC, a “K-Tree”, or support for first-class tables and queries. K2 even came with facilities for making data-bound GUI applications, but there’s no equivalent for modern dialects of K. (Unless you count Lil?)

Readers will enjoy your discussion of K versions and history on the ArrayCast.

K itself is implemented simply, through its own unique paradigm. Alan Kay’s STEPS project tried to make a simpler OS too, and Aaron Hsu believes doing everything through array languages is optimal. How do we balance the different goals of simplicity: the user being able to write things easily at a certain level of abstraction, the implementation being able to iterate more easily toward an optimal system, and so on?

The conventional wisdom is usually to start with a very simple language (maybe even just a system of axioms) and build up a stack of progressive abstractions to grow your source language toward a solution for your specific problem. This is both the Lispy way and the Forthy way of doing things, with varying degrees of mechanical sympathy and appetite for encapsulation.

APLs suggest a different approach: write your applications as directly in the language as possible, avoiding the introduction of new abstractions. Represent your data in the datatypes that already exist. All the primitives of the language should fit together in as many useful ways as possible, applicable to as many useful situations as possible. As we tackle new domains with applications, the languages grow, but very slowly and deliberately.

Terseness of notation lets us think and communicate in terms of programming idioms that are tailored to fit each situation, rather than libraries that are reused as sealed units. The abstraction and reuse can live in our heads instead of on a hard drive. We can write our programs comfortably on paper or a whiteboard rather than needing an editor or IDE to augment us. It’s a very humanist way of programming, I think.

How do you like to approach testing?

The best approach to testing will always depend on the nature of the application being tested; I think many of the very strong opinions about “the right way” to test things you’ll find online are a reflection of the domains the authors have worked in. The best tests are easy to write, easy to maintain, and highly effective at catching regressions. The worst tests are difficult to write, tedious to maintain, and either rarely catch mistakes or produce frequent false positives.

Interpreters and compilers tend to behave like a relatively pure function: provide a program fragment, run it, get a specific known result (or a specific error). For this kind of work I like leaning on end-to-end integration tests and avoiding internal test harnesses; this also makes it easy to compare multiple implementations with a single test suite. For something like a webapp frontend or a video game, tests are tremendously more complex and tedious to author, and they tend to be quite brittle, prematurely ossifying designs. In Decker I expose public “headless” scripting APIs and use them to test as much of the surface area of the application as I can, but for the uppermost layers of the GUI I find it most practical to rely on manual testing.
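The "one suite, many implementations" idea can be sketched very simply: keep the test cases as data (source fragment in, expected result out), and run the same table against any interpreter exposed as a `run(source) -> str` function. Everything here is hypothetical scaffolding; the toy interpreter just sums `+`-separated integers as a stand-in for a real language.

```python
# Sketch: implementation-agnostic end-to-end tests for an interpreter.
def toy_run(src: str) -> str:
    """A stand-in interpreter: evaluates '1+2+3'-style integer sums."""
    return str(sum(int(t) for t in src.split("+")))

# Each case pairs a program fragment with its expected printed result.
CASES = [
    ("1+2",      "3"),
    ("10+20+3",  "33"),
]

def run_suite(run):
    """Run every case through `run`; return the list of failures."""
    failures = []
    for src, expected in CASES:
        got = run(src)
        if got != expected:
            failures.append((src, expected, got))
    return failures

# Any alternative implementation with the same run(src) signature
# can be checked against the identical CASES table.
```

Because the cases never touch interpreter internals, swapping in a second implementation (or a reimplementation in another language, driven over a pipe) needs no changes to the suite itself.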

In APL-family languages we have building-blocks that often let us express complex algorithms as straight pipelines with no explicit branches or iteration; when I can build up that type of program in a REPL interactively, the only path is the happy path, and the REPL exploration serves as an exhaustive test on its own!
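Here is a hedged Python approximation of that pipeline style: a small computation (a trailing running maximum and the amounts by which each step raises it) written as a chain of whole-array stages with no explicit branches or index fiddling. Each intermediate name can be inspected at a REPL, which is exactly the interactive build-up described above. (The data and stage names are invented for illustration.)

```python
# Sketch: a branch-free "straight pipeline" over a whole array.
from itertools import accumulate

xs     = [3, 1, 4, 1, 5, 9, 2, 6]
peaks  = list(accumulate(xs, max))                    # running maximum so far
climbs = [b - a for a, b in zip(peaks, peaks[1:])]    # how much each step raised the peak
total  = sum(climbs)                                  # equals max(xs) - xs[0]
```

There is no conditional anywhere, so exercising the pipeline on one representative input at the REPL walks every stage of the only path it has.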