00:12.95 My talk today is Stop Writing Dead Programs.
00:14.57 This is sort of the thesis statement
00:16.01 for the talk, even though it's 40 years
00:18.52 old: this Seymour Papert quote saying
00:19.91 that we're still digging ourselves into
00:21.65 a kind of pit by continuing to
00:23.68 preserve practices that have no rational
00:26.92 basis beyond being historical.
A strong runner-up was this quote, which captures the essence of what we should be trying to do when creating new languages:
“The Self system attempts to integrate intellectual and non-intellectual aspects of programming to create an overall experience. The language semantics, user interface, and implementation each help create this integrated experience.”
00:29.08 I will start with a somewhat personal
00:30.47 journey in technology. I'm going to ask
00:31.66 you for some feedback at some different
00:34.13 places, so first off – by applause – how
00:36.97 many of you know what this is?
00:40.43 Okay, okay, that's actually more
00:41.86 than I expected. Now, how many of you
00:44.33 actually used one of these?
00:47.63 Okay, so what I can say is that I am part
00:49.36 of the last generation of people who
00:51.04 were forced to use punch cards at school.
00:52.79 I still had to write Fortran programs
00:55.25 with punch cards, and this thing is a
00:56.93 card punch. It's like a keyboard,
00:58.85 except when you press the keys you're
01:00.22 actually making holes in a piece of
01:01.54 paper, and then you feed them
01:03.29 into this thing in the back, and the
01:05.57 pieces of paper look like this. So each
01:06.83 one of these vertical columns is
01:08.51 basically a byte, and you're stabbing
01:10.19 through the different bits of the
01:11.81 byte to indicate what letter it is. If
01:13.37 you look at the top left corner,
01:16.55 you see Z(1) = Y + W(1).
01:19.19 This is one line of code – a
01:21.53 card is one line of code. Something to
01:23.87 notice about this card: it's 80 columns
01:25.55 wide. We're going to come back to that
01:25.56 later.
Some commenters were confused that we still used punched cards in the 80s, when display terminals already existed. This was in the context of a required class for engineering students to prepare them for the possibility that they would encounter punch cards in the wild. Most of us never did, beyond this one class.
01:27.46 This design dates from
01:29.45 1928. This is a Hollerith punch card, the
01:31.42 same one used forever. Now, what does a
01:34.24 program look like if you're programming
01:35.33 like this? It looks like this: it's a deck.
01:37.37 Now notice the rubber band. When you're
01:40.01 doing this, you live in terminal fear
01:41.74 that you will drop the deck of cards. It
01:44.39 is a terrible experience re-sorting
01:47.21 the cards. That long diagonal stripe
01:48.89 there is so that this person, who made
01:50.51 this particular deck, could put it back
01:51.83 together without having to look at every
01:53.33 single line in the process. And the words
01:55.24 written on the top of the deck are sort
01:57.59 of indicating where different
01:58.49 subroutines are located within this program.
02:00.95 Now, to give you a sense of how long
02:02.99 these programs can get, here's a picture
02:05.45 (forgive me, it's a low-quality picture).
02:06.64 This is the actual reader I used and
02:08.63 that in the front there is an actual
02:10.01 program I wrote – the lower right hand
02:11.80 corner one – which was a Fortran program
02:13.85 to simulate rocket flight, because my
02:16.72 particular school had a connection to
02:18.22 NASA and we did a lot of rocket-y things.
02:19.67 Right, so can you imagine how long it
02:22.25 took me to punch all these and put them
02:23.51 in there, and what we would do is give
02:25.25 them to a system operator who would feed
02:26.75 them into a computer. In this case the
02:28.67 computer I personally used was this one.
02:30.53 This is a VAX-11/780. This machine cost
02:34.19 nearly a million dollars, and had 16
02:35.99 megabytes – that's megabytes – of RAM, ran at
02:38.69 5 megahertz – that's megahertz! This
02:41.44 thing in front of me here is thousands
02:43.36 of times more powerful than the machine
02:44.80 that I was using then – that the whole
02:46.13 campus was using to do these kinds of
02:47.57 things then – and what would the output
02:49.49 look like that came from sending this
02:51.35 enormous deck of cards in? Well, it would
02:53.15 come out on a line printer that looks like
02:54.77 this. And you wouldn't get it right
02:56.75 away. An operator would give it to you
02:58.72 later. Note the vintage haircuts, the
03:00.89 fellow in the middle there is the actual
03:02.57 operator who was handing me these
03:03.71 outputs, and he's the person who gave me
03:04.85 these photos of this equipment.
03:06.47 So this process, as you can imagine, was
03:08.57 hard, but it was hard in a dumb way.
03:11.33 Some things are hard because they have
03:13.07 to be, and I really support the idea of
03:14.69 overcoming challenges and doing hard
03:16.43 things, but this was hard
03:18.29 for reasons that had nothing to do with the
03:19.79 actual problem you're trying to [solve].
03:22.19 Like, something with a rocket and a simulation,
03:23.57 and you're thinking about not dropping
03:25.13 your punch card deck, and it's taking you
03:26.57 forever to find out what happened. So, it
03:29.14 really hinges on your ability to emulate
03:30.94 the computer in your head because the
03:33.11 computer's not going to help you in any
03:34.30 way. There's not an editor, there's
03:35.44 nothing, and that in turn hinges on
03:38.08 working memory, which is something that
03:39.47 is not very well distributed among
03:41.08 humans. There were a small number of
03:42.94 us for whom this whole thing came pretty
03:44.57 naturally, and we were treated as, like,
03:46.78 special people – as kind of high priests
03:48.58 with magical powers, and this is how we
03:50.69 came to think of ourselves, right, that
03:52.36 we're special [because] we can make it work.
03:53.86 But the truth is we were less priests
03:55.97 like this than we were monks like this –
03:57.41 hitting ourselves in the head.
03:59.93 Right, but the problem is –
04:02.86 as Peter Harkins mentions here – that
04:05.39 programmers have this tendency to, once
04:07.07 they master something hard (often
04:08.86 pointlessly hard), rather than then making
04:10.78 it easy they feel proud of themselves
04:12.58 for having done it and just perpetuate
04:14.33 the hard nonsense. And I'm going to argue
04:16.67 that a lot of what we still do today is
04:18.28 very much like what I was doing on that
04:19.78 old VAX. For one thing, there's a lot of
04:21.59 batch processing going on, and
04:23.74 what's wrong with batch processing? Hella
04:25.73 long feedback loops. It's no good, takes
04:27.77 you forever – it took me 45 minutes to
04:29.27 find out what a one card change would do
04:31.49 in the printout that I would get back,
04:32.99 because that was the loop. You're
04:34.79 thinking: well, it's not like that for us,
04:36.23 right, we're not poking holes in paper
04:37.67 cards – we have display terminals! But
04:40.31 how many of you guys have compile
04:42.05 cycles that can take 45 minutes? Famously,
04:44.87 the Go team wrote Go because they
04:47.21 were so angry about waiting for an hour,
04:48.89 because they wanted to see what was
04:50.57 going to happen with some C++
04:51.59 code they were running on some horrible
04:52.85 giant Google codebase. Maybe you want
04:54.89 to deploy your stuff and see if it works,
04:56.27 because we're all running web apps now.
04:57.29 So do you, like, stuff it in a
04:58.90 Docker container, and then ship it out to
05:00.65 the cloud and wait for a CI job? How long
05:02.57 does that take?
05:04.15 Two hours for this guy! I mean why do we
05:06.77 tolerate this? This is crazy! Docker
05:08.51 shouldn't exist. It exists only because
05:09.95 everything else is so terribly
05:11.15 complicated that they added another
05:12.40 layer of complexity to make it work. It's
05:14.74 like they thought: if deployment is bad,
05:16.12 we should make development bad too. It's
05:18.59 just... it's not good.
05:21.53 So, what kind of things do we inherit
05:23.15 from this way of thinking about the
05:24.59 world? We get funny ideas that are built
05:26.62 into programming about time and state.
05:28.79 Ideas like, there should be a compile/run
05:31.07 cycle. This is a terrible idea, but it's
05:33.23 an ancient idea, that you're going to
05:34.49 compile the thing and you're getting an
05:35.68 artifact and you're going to run the
05:36.71 artifact over there and those two things
05:38.15 are completely different phases of your
05:39.71 process. There's going to be linear
05:41.81 execution – most programming languages
05:43.79 assume that there's only one thread and
05:45.35 you're going to run straight through
05:46.43 from the beginning to the end; that your
05:48.35 program is going to start up from a
05:49.67 blank state and then run to termination.
05:51.40 Now, how many programs that we actually
05:53.45 write do that? We'll revisit that in a
05:55.43 moment. This really only works if your
05:56.99 program is some kind of input/output
05:58.49 transformer. So there's no runtime
06:00.77 introspection, because runtime is
06:02.02 happening over there and your actual
06:03.40 work is happening over here, and you just
06:04.85 have to kind of guess from what happened,
06:05.93 how it might be related to your code, and
06:08.09 if there's a bug – well, sorry, failures
06:10.12 just halt your program. You get maybe a
06:11.68 core dump, or you get a log message
06:13.18 somewhere with a stack trace in it. Now,
06:15.23 what kind of programs do we really write?
06:16.90 Mostly long-lived servers. I've got
06:18.89 server processes with uptimes of a
06:20.81 thousand days.
06:21.89 They don't work the same way
06:23.62 /usr/bin/sort works. I don't want a
06:25.73 process that's optimized for writing
06:27.35 that. We also write GUI programs. GUI
06:29.51 programs are more intense than this, even.
06:31.30 So you've got all of these
06:32.68 different kinds of input coming into the
06:34.37 program, and maybe it's talking to
06:35.68 the keyboard, it's talking to the mouse,
06:36.77 it's talking to the network, if it's Zoom
06:38.57 it's talking to the camera, it's talking
06:40.37 to the microphone – it's crazy. So this
06:42.59 approach to programming just
06:44.15 doesn't work well for the things we
06:45.59 actually build. It also infected
06:48.17 programming language theory. So, if the
06:50.21 program is a static artifact, what does
06:51.59 that mean? It means we're mostly going to
06:53.27 concentrate on algebraics, so we're going
06:54.46 to talk about syntax and semantics and
06:55.79 very little else.
06:57.05 There's going to be no concern really
06:58.49 for pragmatics – and what I mean here by
06:59.80 pragmatics is what it's actually like to
07:01.55 interact with your programming
07:02.74 environment, and this leads to
07:04.37 mathematics envy and a real fixation on
07:06.46 theorem proving.
07:07.96 So, to give an example of what happens
07:09.95 when people actually concentrate on a
07:11.80 part of programming and make progress,
07:13.01 we're going to take a quick tour through
07:14.74 syntax and semantics. We're going to do a
07:17.02 simple transformation here. We've
07:18.40 got 1 through 4, we want it to be
07:19.67 2 through 5. We want it to be
07:21.17 relatively general. I've written some
07:22.90 example programs that do this in a
07:25.37 variety of programming languages. The
07:28.07 first one here is in ARM64 machine
07:32.39 language, because my laptop happens to
07:34.18 run this processor now. As you can
07:35.87 plainly see from this code, it starts off...
07:38.57 Oh wait! Does everyone here understand
07:40.15 ARM64? Okay, all right, it's a little easier
07:42.46 if I do this, so you can see where the
07:43.79 instructions are within these different
07:45.11 words. This is a cool instruction set.
07:47.39 It's not like x86. [In x86], all the
07:49.18 instructions are different lengths. In
07:50.45 ARM64, they're all the same length because
07:51.77 it's RISC, but we'll do it in assembly
07:54.29 language – it'll be easier, right. So
07:56.15 we'll start with this label here, add one,
07:57.83 and we've got the signature of what it
07:59.08 would be as a C program after that.
08:00.95 What am I actually doing when I write
08:02.33 this program? Well, the first thing I'm
08:03.58 doing is moving things from registers
08:05.51 onto the stack. Why am I doing this? I'm
08:08.27 doing this because the ABI says I have
08:09.77 to. No other reason. It's nothing to do
08:11.62 with my problem. And then I want to call
malloc because I have to allocate some
08:14.74 memory to return the new, you know, array,
08:16.90 with the new stuff in it. So what I have
08:18.52 to do...
08:19.30 I'm doing crazy things. Look down here,
08:21.89 you see the registers are all called
08:23.45 with X names? That's because
08:24.83 the 64-bit registers have X names, but I get
08:26.51 down here to set up for malloc, and now
08:27.71 I'm using W names. Why? Well, I just have
08:29.51 to know that I have to do something
08:30.46 special if it's a 32-bit number, and
08:32.44 it'll mask off 32 of the bits and still
08:34.31 work great. Now I have to stuff things
08:37.07 in these registers. I have to multiply
08:38.57 one of the variables. Do I use a multiply
08:39.94 for that? No, I'm doing it with a bit
08:41.38 shifting operation because that's what's
08:42.82 faster on this processor. And then I call
malloc, and I get back what I want. Great.
08:46.91 Now, I want a loop. This is what a loop
08:48.76 looks like. Notice we're on the second
08:49.91 page, and all I'm doing is incrementing
08:52.25 some numbers. So, I come through and
08:54.23 I do a comparison. Okay, is this register
08:55.91 that I put this value into zero? If
08:57.71 it's less/equal, then I jump to return. You
08:59.15 can't see return, it's on another page.
09:00.47 There's a third page.
09:02.09 So, I move zero into this other register
09:04.25 and I go through here and bang bang... I'm
09:06.23 I'm not going to bore you with the whole
09:07.67 thing. I'm bored just talking about it.
09:09.35 Imagine how I felt writing it!
09:11.15 And then at the end I have to do the
09:12.53 reverse of the things I did at the
09:13.79 beginning to set everything back into
09:15.59 the registers from the stack where I
09:16.85 saved them. Why? Because I have to have
09:18.23 the right return address to give this
09:19.61 thing back. I have to do this like a
09:20.99 voodoo incantation, because it's what the
09:22.55 processor wants. Nothing to do with the
09:24.41 problem I'm trying to solve. How can we
09:26.09 do it better? Hey look – it's C! This is
09:28.25 exactly the same program. Many fewer
09:30.23 lines of code. However, it has a load of
09:32.50 problems that have nothing to do with
09:33.71 what I'm trying to accomplish as well.
09:34.91 For one, I have to pass two things. I have
09:37.31 to pass the length of the array separately
09:38.99 from the array. Why? Because there's no
09:41.38 sequence type in C. Great work guys! 😆 So
09:43.67 then, from there, I want to return this
09:45.23 value. This modified sequence. And what do
09:46.79 I have to do? Well, I had to do this in
09:48.41 assembly too, but this is crazy. I have to
09:49.85 allocate memory, give it back, and then
09:51.29 hope that the other guy is going to free
09:52.67 that memory later. This has nothing to do
09:54.88 with what I'm trying to accomplish. I
09:56.63 want to increment each of these numbers.
09:57.71 I do it with a for
loop that counts from
09:59.81 one to the length of the array. Is
10:01.43 counting to the length of the array
10:02.38 relevant? Right, no. No, this is not
10:04.37 relevant. In fact, essentially one line
10:07.31 of code of this whole thing – the actual
10:08.99 increment – is the only thing that
10:10.49 actually matters. On the other hand, I can
10:13.00 compliment C as a portable assembly
10:14.93 language because you see I don't have to
10:15.88 do the stack nonsense by hand, and
10:17.75 instead of telling it that it's four
10:19.85 bytes wide, I can actually use sizeof
10:21.76 to know that, but that's about the
10:23.44 only way it's really an improvement. Now
10:25.19 let's look at Lisp. Note that Lisp is
10:26.75 about 10 years older than C. Here I have
10:29.32 a sequence abstraction. I have four
10:31.07 numbers and I can use a
10:32.57 higher order function to go over it and add
10:33.94 one to each of them. This is a tremendous
10:35.38 improvement by going back in time.
10:38.16 But we can do better. We can do better
10:38.87 than this notation. We can go to
10:41.09 Haskell. So, in Haskell what do we have?
10:43.07 This is really lovely. We have this thing
10:45.11 where we auto-curry the (+ 1), and we
10:47.50 get a function that adds one. This is
10:49.25 getting pretty concise. Can anybody here
10:50.81 quickly name for me a language in which
10:52.37 this exact operation is even more
10:54.29 concise? I'll give you a moment.
10:56.03 I hear APL, and indeed APL! So here we
11:00.94 have [rank] polymorphism. I have
11:04.06 a single number –
11:06.41 a scalar – and I have a set of numbers.
11:07.67 Note that there's no stupid junk. I don't
11:09.35 have to put commas between everything. I
11:11.38 don't have to wrap anything in any
11:12.76 special [delimiters] or anything of this
11:14.09 nature. I just say add one to these
11:15.88 numbers, and I get what I was after. So if
11:17.44 we start from the assembly language and
11:20.38 we come to the APL, which is,
11:21.65 again, like eight years older
11:23.44 than C, we find that syntax and semantics
11:25.61 can take us a long way.
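The slides aren't reproduced in this transcript, so here is a rough sketch of the two ends of that spectrum, using Python as a neutral stand-in (the talk's actual examples were ARM64 assembly, C, Lisp, Haskell, and APL); the function names are invented for illustration:

```python
# The same "add one to each element" program at two levels of abstraction.

def add_one_manual(xs):
    """C-style: allocate a result by hand, count an index up to the length."""
    result = [0] * len(xs)       # explicit allocation, like the malloc call
    i = 0
    while i < len(xs):           # bookkeeping unrelated to the actual problem
        result[i] = xs[i] + 1    # the only line that actually matters
        i += 1
    return result

def add_one_hof(xs):
    """Lisp/Haskell-style: a higher-order function over a sequence abstraction."""
    return list(map(lambda x: x + 1, xs))

print(add_one_manual([1, 2, 3, 4]))  # [2, 3, 4, 5]
print(add_one_hof([1, 2, 3, 4]))     # [2, 3, 4, 5]
```

Both produce 2 through 5 from 1 through 4; only the second says little beyond the increment itself, which is the talk's point about syntax and semantics.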
11:27.88 But there are other things that we care
11:30.05 about where no one has put in this much
11:32.03 effort. And one of those things is state
11:34.06 and time. Almost every programming
11:35.44 language doesn't do anything to help us
11:37.85 with managing state over time from
11:40.37 multiple sources. There are some notable
11:42.41 exceptions. I will talk about them now. So,
11:46.56 because Rich Hickey really cared about
11:47.20 concurrency, he included immutable data
11:47.21 structures. So now you don't have
11:49.06 constant banging on the same things and
11:50.87 crushing each other's data. This is very
11:53.32 helpful. What else? He's got atoms. These
11:54.71 are synchronized mutable boxes with
11:56.81 functional update semantics. Everybody
11:58.25 uses these. These are great. He also has a
11:59.87 full Software Transactional Memory
12:01.06 implementation that frankly nobody uses,
12:02.63 but it's still great. It just has a more
12:05.50 complicated API, and the lesson from this
12:07.06 probably is: if you want people to do the
12:08.81 right thing, you have to give them an API
12:10.73 simple enough that they really will.
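As a rough sketch of the idea, not Clojure's actual implementation (which uses lock-free compare-and-swap on the JVM rather than a lock), an atom-like box might look like this in Python; the class name and methods are invented for illustration:

```python
import threading

class Atom:
    """Sketch of a Clojure-style atom: a synchronized mutable box whose
    only update operation applies a pure function to the previous value."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # simplification: Clojure uses CAS, not a lock

    def deref(self):
        return self._value

    def swap(self, f, *args):
        # Apply a pure function of the old value, atomically.
        with self._lock:
            self._value = f(self._value, *args)
            return self._value

counter = Atom(0)
counter.swap(lambda n: n + 1)
counter.swap(lambda n: n + 10)
print(counter.deref())  # 11
```

The appeal is exactly the simple API the talk describes: one read operation, one functional update operation, and no way to corrupt the box from another thread.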
12:12.29 Then on top of this, we have core.async.
12:13.49 Now, I have less nice things to
12:14.63 say about core.async. I like
12:16.31 Communicating Sequential Processes, the
12:18.94 way everybody else does, but this is
12:21.23 implemented as a macro and as a
12:22.31 consequence when it compiles your CSP
12:23.93 code you end up with something that you
12:25.31 can't really look into anymore. Like, you
12:27.76 can't ask a channel how many things are
12:30.41 in that channel. You can't really know
12:32.15 much about what's happening there. And I
12:33.94 would say that in the JVM, I agree with
12:35.26 what Rich said the year before he
12:36.65 created core.async, which is that you
12:37.91 should just probably use the built-in
12:39.41 concurrent queues.
12:41.21 Now, in ClojureScript, of course,
12:43.31 these things were more useful because
12:44.87 everyone was trapped in callback hell.
12:46.43 We'll see what happens moving on, now
12:48.88 that we have async/await in JavaScript.
12:49.97 Moving on to another implementation
12:51.41 of CSP, Go. Go actually did something good
12:53.38 here, right – and I'm not going to say
12:55.43 much else that's great about Go:
The Go team includes several all-time great programmers. I respect them all. But I do feel that they had a chance to be more ambitious than they were with Go, which – with the weight of their reputations and the might of Google behind it – could have shifted the culture in a better direction.
12:59.62 they built a fantastic runtime for
13:01.91 this stuff. It's really lightweight, it
13:04.06 does a great job. The bad news is that Go
13:04.07 is a completely static language, so even
13:04.85 though you should be able to go in and
13:07.12 ask all of these questions during
13:09.29 runtime while you're developing from
13:11.26 within your editor, like a civilized
13:13.43 person, you can't. You end up with a
13:14.87 static artifact. Well, that's a bummer.
13:15.88 Okay.
13:17.26 And I would say, actually, before I
13:18.71 move on, that anytime you have this
13:20.56 kind of abstraction where you have a
13:22.31 bunch of threads running, when you have
13:22.32 processes doing things, you really want
ps and you really want kill. And,
13:24.82 unfortunately, neither Go nor Clojure can
13:26.44 provide these because their runtimes
13:27.41 don't believe in them. The JVM
13:28.49 runtime itself thinks that if you kill a
13:30.59 [thread] you're going to leak some
13:32.56 resources, and that the resources you
13:34.49 leak may include locks that you need to
13:35.87 free up some other threads that are
13:38.03 running elsewhere, so they've just
13:40.37 forbidden the whole thing. And in Go you
13:41.21 have to send it a message, open a
13:42.76 separate channel, blah blah blah.
13:44.50 Erlang, on the other hand, gets almost
13:46.43 everything right in this area. They've
13:47.56 implemented the actor model, and they've
13:49.31 done it in a way
13:50.69 where you have a live interactive
13:52.37 runtime, and because they're using shared
13:54.11 nothing for their state and supervision
13:55.55 trees, you can kill anything anytime and
13:57.05 your system will just keep running. This
13:58.55 is fantastic. This is great. Why doesn't
13:59.93 everything work like this? It also
14:02.03 comes with introspection tools, like
14:03.65 Observer, that should make anyone using
14:05.21 any other platform to build a
14:06.88 long-running server thing fairly jealous.
14:08.32 Now, when I say this, I'm not telling you
14:10.12 you should use Erlang. What I'm telling
14:11.32 you is whatever you use should be at
14:13.31 least as good as Erlang at doing this,
14:14.50 and if you're developing a new language –
14:16.19 for God's sake – please take notice.
14:18.23 I can talk now about something that
14:19.91 I worked on with my colleague Matt
14:21.76 Huebert. This is something that I
14:23.21 particularly like. This is a hack in
The cells project was Matt's baby. He did almost all the coding. I worked with him as a mentor because I had already implemented a number of dataflow systems.
14:24.94 ClojureScript. We call it cells, and it
14:28.19 takes spreadsheet like dataflow and adds
14:30.35 it into ClojureScript. This resulted in a
14:31.61 paper that was delivered at the PX16
14:33.05 workshop at ECOOP in Rome in 2016.
14:35.09 You've got things like this, right. So, you
14:38.93 say: here's an interval, every 300
14:41.15 milliseconds give me another random
14:43.79 integer, and it does. And then you can
14:45.47 have another thing refer to that, in this
14:48.23 case consing them on, and now we build a
14:49.97 history of all the random integers that
14:51.29 have happened. What else can you do? Well
14:52.79 you can refer to that, and you can (take 10)
14:54.76 with normal Clojure semantics, and
14:56.21 then map
that out as a bar chart. What do you
14:57.71 get? A nice graph. A graph that
14:58.97 moves in real time. Or we can move on to
15:00.53 this. We added sort of Bret Victor-style
15:02.03 scrubbers into it so that you could do
15:03.88 these kinds of things. I'll show you
15:05.93 instead of telling you, because it's
15:08.09 obvious if you look at it what's going
15:10.12 on here. We did this partially to
15:12.17 show people that you can just really
15:14.50 program with systems that have all those
15:15.71 features that Bret was demoing.
15:16.73 Source code's still out there – if anybody
15:17.99 wants to do that, you can. We
15:20.32 moved on from that to maria.cloud, which
Maria was a joint project of Matt, Dave Liepmann, and myself. We wanted a good teaching environment that requires no installfest for ClojureBridge.
15:21.47 takes all of that code we wrote for
15:23.32 cells and turns it into a notebook. We
15:25.85 actually did this for learners. Take a
15:27.41 look at this. This is a computational
15:29.38 notebook. It has the cells, it gives you
15:31.25 much better error messages than default
15:32.44 Clojure, and so on. We used this to teach.
15:34.06 It was a great experience, and currently –
15:36.41 this year – thanks to Clojurists Together, we
15:38.38 have some additional funding to bring it
15:41.15 up to date and keep it running. I
15:42.53 encourage everybody to check it out. The
15:44.09 last thing here on this list is the
15:46.43 propagators. The propagators come
15:48.23 from Sussman – this is Sussman's
15:49.49 project from around the same time that
15:50.93 actors were happening and Alan Kay was first
15:53.26 getting interested in Smalltalk. This
15:54.65 was a really fertile scene at MIT in the
15:56.21 early 70s. It was actually the project
15:58.61 for which he originally hired Richard Stallman, of
16:00.47 the GNU project, as a grad student, to
16:02.44 work on, and then later did some
16:03.94 additional work with Alexey Radul, which
16:05.56 expanded the whole thing.
16:07.67 I can't tell you all about it here.
16:09.47 There's just too much to say, but I can
16:11.09 tell you there was a fantastic talk at
16:13.12 the 2011 Strange Loop called We Really
16:14.99 Don't Know How to Compute!, and I
16:16.67 recommend that you watch that when you
16:18.82 get out of Strange Loop. Just go home and
16:20.56 watch that talk. It's amazing. A side
16:22.00 thing is that the propagator model was
16:23.93 used by one of the grad students at MIT
16:25.18 at the time to make the very first
16:26.38 spreadsheet. VisiCalc was based on this
16:28.18 model. This is a really useful
16:30.11 abstraction that everyone should know
16:32.56 about. It's data flow based, it does truth
16:33.94 maintenance, and it keeps provenance of
16:35.21 where all of the conclusions of the truth
16:37.55 maintenance system came from, which means
16:39.91 it's probably going to be very valuable
16:41.74 for explainable AI later.
There are a number of other approaches I really like, but which I didn't have time to get into here. FrTime, from the Racket community, is great. In terms of formalisms for reasoning about this sort of thing, I really like the Π-calculus.
16:44.03 We'll move to another area where
16:46.37 there's been even less progress.
16:47.87 Now we're getting to the absolute
16:49.31 nadir of progress here, [and] that's in
16:50.50 program representation. Let's look at
16:52.55 that punch card again:
16:54.35 80 columns, there it is.
16:57.11 Now look at this. This is the output of a
16:58.43 teletype. Notice that it is fixed width
16:59.62 and approximately 80 columns. Notice that
17:00.94 the fonts are all fixed width.
17:02.38 This is the teletype in question.
17:04.85 This looks like it should be in a museum,
17:06.82 and it should be in a museum, and – in fact –
17:09.11 is in a museum.
17:11.87 We got these. So, this is the terminal in
17:13.54 which I did a lot of hacking on that VAX
17:16.42 that you saw earlier (when I wasn't
17:17.75 forced to use punch cards), and a lot of
17:19.37 that was in languages like VAX Pascal –
17:22.54 yeah – but also Bliss, which was pretty
17:25.42 cool. So you'll notice that this is a
17:26.99 VT100 terminal. And all of you are
17:28.01 using machines today that have terminal
17:29.45 emulators that pretend to be this
17:32.39 terminal; that's why they have VT100
17:34.49 escape codes, because those escape codes
17:36.28 first shipped on this terminal. Now we'll
17:39.35 move on to another terminal. This is the
17:41.02 one that I used when I was doing all of
17:43.07 my early Unix hacking back in the 80s.
17:45.35 This is called an ADM-3A. Now, by
17:46.97 applause, how many of you use an editor
17:49.66 that has vi key bindings? Come on! Yeah,
17:51.52 all right, yeah. So then you might be
17:52.90 interested in the keyboard of the ADM-3A,
17:55.07 which was the one that Bill Joy had at
17:58.31 home to connect to school through a
18:00.11 modem while he was writing vi. So here it
18:03.52 is. Note the arrow keys on the h-j-k-l.
18:05.51 They are there because those are the
18:07.07 ASCII control codes to move the roller and the
18:08.69 printhead on the old teletype that you
18:10.61 saw a moment ago. So you'd hit control
18:12.89 plus those to control a teletype.
We used to use CTRL-h to back up over printed characters to then type a sequence of dashes as a strikethrough on these old printers. We also used the same trick on display terminals to make fancy spinning cursors.
18:17.15 It happened to have the arrow keys, he
18:18.95 used them. Look where the control key is.
18:21.89 For all you Unix people, it's right next
18:23.93 to the a. To this day, on this
18:25.66 supercomputer here, I bind the caps lock
18:27.77 key to control because it makes my life
18:29.63 easier on the Unix machine that it is.
18:31.31 Look up there, where the escape key is, by
18:32.99 the q. That's why we use escape to get
18:35.15 into command mode in vi, because it was
18:37.31 easily accessible. Now scan across the
18:38.81 top row just right of the 0. What's
18:41.45 that? The unshifted * is the :.
18:43.37 That's why [it does] what it does in vi,
18:45.40 because it was right there. And now the
18:47.27 last one, for all the Unix people in the
18:49.31 audience, in the upper right hand corner
18:51.28 there's a button where when you hit
18:53.81 control and that button, it would clear
18:55.97 the screen and take the cursor to the
18:58.07 home position. If you did not hit control,
18:59.33 instead hit shift, you got the ~.
19:00.52 Notice tilde is right under home. If
19:01.54 you're wondering why your home directory
19:02.93 is tilde whatever username, it's
19:04.31 because of this keyboard.
19:07.97 Now here is Terminal.app on my mega
19:09.71 supercomputer. Notice 80 Columns of fixed
19:11.21 width type. Notice that when I look at
19:13.19 the processes they have ttys – that stands
19:14.99 for teletype.
19:15.00 This machine is cosplaying as a PDP-11.
19:19.54 Now, whenever I get exercised about this,
19:23.45 and talk about it, somebody sends me this
19:25.78 blog post from Graydon Hoare. He's
19:27.77 talking [about how] he'll bet on text. He
19:29.57 makes good arguments. I love text. I use
19:35.73 text every day. Text is good! The thing
19:35.74 about it, though, is that the people who
19:42.40 send me this in support of text always
19:45.11 mean text like this – text like it came
19:46.54 out of a teletype – and never text like
19:48.04 Newton's Principia, never text like this
19:50.39 from Wolfgang Weingart. That is, these
19:52.43 people don't even know what text is
19:55.25 capable of! They're denying the
19:56.93 possibilities of the medium!
19:58.90 This is how I feel about that. I've
20:01.01 promised Alex I will not say anything
20:02.69 profane during this talk, so you will be
20:05.27 seeing this 💩 emoji again.
20:07.66 The reason I disagree with this position
20:08.87 is because the visual cortex exists, okay?
20:10.49 So this guy, this adorable little
20:12.65 fella, he branched off from our lineage
20:14.69 about 60 million years ago. Note the
20:16.66 little touchy fingers and the giant eyes,
20:18.47 just like we have. We've had a long time
20:21.28 with the visual cortex. It is very
20:23.09 powerful. It is like a GPU accelerated
20:25.90 supercomputer of the brain, whereas the
20:28.37 part that takes in the words is like a
20:30.11 very serial, slow, single-thread CPU, and I
20:32.63 will give you all a demonstration right
20:34.07 now.
20:36.65 Take a look at this teletype-compatible
20:38.87 text of this data and tell me if any
20:42.47 sort of pattern emerges. Do you see
20:44.57 anything interesting?
20:46.31 Here it is plotted X/Y. Your brain knew
20:48.47 this was a dinosaur before you knew that
20:50.27 your brain knew this was a dinosaur.
This dataset is Alberto Cairo's Datasaurus.
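The effect is easy to reproduce. Here's a small Python sketch (toy data, not Cairo's actual Datasaurus points) showing how one-number summaries can agree exactly while the picture the eye would catch is completely different:

```python
import random
import statistics as st

random.seed(0)
xs = list(range(20))
ys = [x % 7 for x in xs]      # a sawtooth: instantly visible on a plot
shuffled = ys[:]
random.shuffle(shuffled)      # same values, pattern destroyed

# The one-number summaries cannot tell the two datasets apart...
assert st.mean(ys) == st.mean(shuffled)
assert st.pstdev(ys) == st.pstdev(shuffled)

# ...but the x/y relationship, which the visual cortex reads at a
# glance, is completely different:
print(list(zip(xs, ys))[:5])
print(list(zip(xs, shuffled))[:5])
```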
20:50.28 That is how powerful the visual
20:51.11 cortex is, and there are loads of people
20:53.81 who have spent literally hundreds of
20:56.45 years getting very good at this. Data
20:58.66 visualization. If I gave you a table
20:59.99 talking about the troop strength of
21:03.35 Napoleon's March to and from Moscow,
21:05.09 you'd get kind of a picture. But if you
21:06.49 look at it like this, you know what kind
21:09.28 of tragedy it was. You can see right away.
21:11.39 This was 175 years ago, and we're still
21:12.77 doing paper tape.
21:14.57 Graphic designers – they know something.
21:16.31 They know a few things. For instance, they
21:18.52 know that these are all channels. These
21:20.45 different things: point, line, plane,
21:22.61 organization, asymmetry – that these things
21:24.11 are all channels that get directly to
21:25.73 our brain, and there is no need to eschew
21:28.54 these forms of representation when we're
21:31.78 talking about program representation.
21:33.71 I recommend everyone in this audience
21:35.69 who hasn't already done so, go just get
21:37.25 this 100 year old book from Kandinsky
21:38.69 and get a sense of what's possible.
21:40.61 Here's one of his students working on
21:42.59 some notation. Look how cool that is! Come
21:44.93 on! All right, so another thing with text
21:47.14 is that it's really bad at doing graphs
21:49.31 with cycles, and our world is full of
21:50.57 graphs with cycles. Here's a Clojure
21:52.13 notation idea of the taxonomy of
21:54.35 animals, including us and that cute little
21:56.02 tarsier. And it works fine because
21:57.35 it's a tree, and trees are really good at
21:59.81 containment – they can do containment in a
22:02.45 single acyclic manner. Now this
22:04.07 sucks to write down as text. This is the
22:05.57 Krebs cycle. Hopefully, all of you learned
22:07.43 this at school. If not, maybe read up on
22:09.52 it.
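The containment-versus-cycles point can be made concrete in a few lines of Python (a simplified taxonomy and a toy stand-in for the Krebs cycle, not real chemistry):

```python
# A tree fits textual containment directly: one nested literal, no labels.
taxonomy = {"primates": {"tarsiers": {}, "simians": {"humans": {}}}}

# A cycle cannot be written as pure nesting; you have to name the nodes
# and encode edges, then chase references to find the loop:
cycle = {"citrate": "isocitrate",
         "isocitrate": "alpha-ketoglutarate",
         "alpha-ketoglutarate": "succinate",
         "succinate": "citrate"}       # the back edge closes the loop

node, seen = "citrate", []
while node not in seen:
    seen.append(node)
    node = cycle[node]
print(seen)  # → ['citrate', 'isocitrate', 'alpha-ketoglutarate', 'succinate']
```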
22:10.90 If you imagine trying to explain this
22:13.61 with paragraphs of text you would never
22:15.40 get anywhere. Our doctors would all fail.
22:17.75 We would all be dead. So instead, we draw
22:21.11 a picture. We should be able to draw
22:23.63 pictures when we're coding as well.
22:25.07 Here's the Periodic Table of the Elements. Look how
22:27.28 beautiful this is. This is 1976. We've
22:28.97 got all these channels working together to
22:30.59 tell us things about all these
22:32.02 different elements, how these elements interact
22:33.59 with each other.
22:35.51 Another area that we've pretty much
22:36.71 ignored is pragmatics, and what I mean by
22:37.97 that – I'm borrowing it from linguistics
22:39.77 because we've borrowed syntax and
22:41.93 semantics from linguistics – pragmatics is
22:43.49 studying the relationship between a
22:45.35 language and the users of the language,
22:47.51 and I'm using it here to talk about
22:48.52 programming environments.
22:49.90 Specifically, I want to talk about
22:51.64 interactive programming, which is I think
22:53.14 the only kind of programming we should
22:55.01 really be doing. Some people call it live
22:56.57 coding, mainly in the art community, and
22:58.43 this is when you code with what Dan
22:59.63 Ingalls refers to as liveness. It is the
23:01.13 opposite of batch processing. Instead,
23:02.93 there is a programming environment,
23:04.73 and the environment and the program are
23:06.28 combined during development. So what does
23:07.61 this do for us? Well, there's no compile
23:09.52 and run cycle. You're compiling inside
23:11.93 your running program, so you no longer
23:13.31 have that feedback loop. It
23:16.37 doesn't start with a blank slate and run
23:19.07 to termination. Instead, all of your
23:20.57 program state is still there while
23:22.01 you're working on it. This means that you
23:24.04 can debug. You can add things to it. You
23:26.99 can find out what's going on, all while
23:29.14 your program is running.
23:31.07 Of course, there's runtime introspection
23:33.23 and failures don't halt the
23:34.61 program. They give you some kind of
23:36.16 option to maybe fix and continue what's
23:37.73 happening now. This combination of
23:39.16 attributes, I would say, is most of what
23:41.21 makes spreadsheets so productive.
23:43.49 And it gives you these incredibly short
23:44.87 feedback loops, of which we'll now have
23:46.90 some examples. If you're compiling some
23:48.95 code, say, in Common Lisp, you can compile
23:50.51 the code and disassemble it and see
23:52.37 exactly what you got. Now the program is
23:54.23 running. The program is alive right now,
23:56.09 and I'm asking questions of that runtime.
23:58.49 And I look at this and I say, okay,
23:59.81 36 bytes – that's too much – so I'll
24:01.66 go through and I'll add some, you
24:03.47 know, optimizations to it, recompile,
24:06.23 16 bytes – that's about as many
24:07.43 instructions as I want to spend on this.
24:10.78 So, I know a bunch of you are probably
24:12.89 allergic to S-expressions. Here's Julia.
24:14.45 You can do exactly the same thing in
24:17.51 Julia. Look at this. You get the native
24:19.07 code back for the thing that you just
24:20.87 made, and you can change it while it's
24:22.43 running.
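For readers outside the Lisp and Julia worlds, here is a rough Python analogue of that ask-the-running-image move, using the stdlib `dis` module (bytecode rather than native code, so much weaker, but the same interaction):

```python
import dis

def add1(n):
    return n + 1

# The program is running, and we ask the runtime what code it actually
# executes for this function -- Python shows bytecode where SBCL or
# Julia would show native instructions.
dis.dis(add1)

ops = [instr.opname for instr in dis.Bytecode(add1)]
print(ops)
```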
A lesser form of livecoding is embodied in PHP. We could spend an hour discussing all the weird, inconsistent things about that language, but I'd argue that the short feedback loops it provides are why so much of the internet still runs on it today (Facebook, Wikipedia, Tumblr, Slack, Etsy, WordPress, &c).
24:24.59 Now what about types? This is where half
24:24.60 of you storm off in anger. So,
24:25.90 I'm going to show you this tweet, and I
24:28.13 wouldn't be quite this uncharitable, but
24:29.93 I broadly agree with this position.
24:31.19 It's a lot of fun. Like, I have
24:32.81 been programming for 45 years. I have
24:34.01 shipped OCaml. I have shipped Haskell. I
24:35.39 love Haskell, actually. I think it's great.
24:35.40 But I would say that over those many
24:37.66 decades, I have not really seen
24:39.83 programs in these languages have any
24:42.40 fewer defects than programs in any other
24:43.90 programming language that I use, modulo
24:46.01 the ones with really bad memory
24:48.77 allocation behavior.
24:51.28 And there has been considerable
24:52.97 empirical study of this question, and
24:55.31 there has been no evidence. It really
I was going to do a little literature review here to show that development speed claims for dynamic languages and code quality/maintenance claims for static languages appear to have no empirical evidence, but Dan Luu has already done a great job of that, so I'll just link to his page on the topic:
“[U]nder the specific set of circumstances described in the studies, any effect, if it exists at all, is small. [...] If the strongest statement you can make for your position is that there's no empirical evidence against the position, that's not much of a position.”
24:56.87 doesn't seem to matter. So if you like
24:59.14 programming in those languages, that's
25:01.90 great! I encourage you to do it! You
25:03.35 should program in whatever you enjoy, but
25:05.02 you shouldn't pretend that you have a
25:07.01 moral high ground because you've chosen
25:08.75 this particular language. And I would say
25:10.31 really that if what you care about is
25:12.16 systems that are highly fault tolerant,
25:14.45 you should be using something like
25:16.37 Erlang over something like Haskell
25:18.47 because the facilities Erlang provides
25:19.78 are more likely to give you working
25:21.04 programs.
Imagine that you were about to take a transatlantic flight. If some engineers from the company that built the aircraft told you that they had not tested the engines, but had proven them correct by construction, would you board the plane? I most certainly would not. Real engineering involves testing the components of a system and using them within their tolerances, along with backup systems in case of failure. Erlang's supervision trees resemble what we would do for critical systems in other engineering disciplines.
None of this is to say that static types are bad, or useless, or anything like that. The point is that they, like everything else, have limitations. If I'd had more time, I would have talked about how gradual typing (e.g. Typed Racket, TypeScript, &c) is likely an important part of future languages, because that approach allows you to defer your proofs until they can pay for themselves.
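A trivial Python illustration of that defer-your-proofs idea (my example, not from the talk): annotations are optional and ignored at runtime, so you can prototype untyped and let a checker such as mypy verify only the parts you've chosen to harden later.

```python
def parse_port(raw):                    # exploratory version: no types yet
    return int(raw)

def parse_port_typed(raw: str) -> int:  # hardened later: same behavior,
    return int(raw)                     # but a checker can now verify callers

assert parse_port("8080") == parse_port_typed("8080") == 8080
```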
25:22.85 You can throw fruit at me – rotten fruit –
25:24.47 at me later. You can find me in the
25:26.45 hallway track to tell me how wrong I am.
25:28.43 So, I've said that, but I'll also
25:30.23 show you probably the most beautiful
25:32.51 piece of [code] that I've ever seen.
25:33.64 Like, the best source code in the world.
25:35.33 And that's McIlroy's Power Serious, which
25:37.19 happens to be written in Haskell. So, this
25:38.75 is a mutually recursive definition of
25:38.76 the series of sine and cosine in two
25:40.13 lines of code. I want to cry when I look
25:42.11 at this because of how beautiful it is.
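The construction needs laziness, which generators can fake. Here is a rough Python rendering of the idea (my naming and encoding, not McIlroy's actual Haskell):

```python
from fractions import Fraction
from itertools import islice

# A power series is an infinite stream of coefficients a0, a1, a2, ...
# Integration shifts and divides: ∫(Σ aₙxⁿ) = Σ aₙ/(n+1) x^(n+1).
def integral(series):
    def gen():
        yield Fraction(0)
        for n, a in enumerate(series()):
            yield Fraction(a) / (n + 1)
    return gen

# The mutually recursive pair: sin = ∫cos, cos = 1 - ∫sin.
def sins():
    yield from integral(coss)()

def coss():
    it = integral(sins)()
    yield 1 - next(it)          # the leading 1 of "1 - ∫sin"
    for a in it:
        yield -a

# First coefficients of sine: 0, 1, 0, -1/6, 0, 1/120
print(list(islice(sins(), 6)))
```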
25:43.25 But that has nothing to do with software
25:44.63 engineering. Do you understand what I'm
25:47.33 saying? There's a different question. The
25:48.89 beauty of the language is not always
25:50.87 what gets you to where you need to go.
25:52.13 I will make an exception here for
25:53.87 model checkers, because
25:55.97 protocols are super hard! It's a good
25:57.52 idea to try to verify them. I've used
25:59.02 Coq and Teapot [for example] for these kinds of
26:00.95 things in the past, and some systems do
26:02.14 have such a high cost of failure that it
26:03.64 makes sense to use them. If you're
26:05.14 doing some kind of, you know, horrible
26:07.01 cryptocurrency thing, where you're likely
26:08.75 to lose a billion dollars worth of
26:11.09 SomethingCoin™, then, yeah, you
26:14.39 maybe want to use some kind of verifier
26:15.83 to make sure you're not going to screw
26:17.33 it up. But, that said, space
26:18.71 probes written in Lisp and FORTH
26:20.21 have been debugged while off world.
Had I had more time, I would have done an entire series of slides on FORTH. It's a tiny language that combines interactive development, expressive metaprogramming, and tremendous machine sympathy. I've shipped embedded systems, bootloaders, and other close-to-the-metal software in FORTH.
26:22.25 If they had proven their
26:23.81 programs correct by construction,
In fact, they did prove their program correct by construction. But there was still human error!
26:25.07 shipped them into space, and then found out
26:26.63 their spec was wrong, they would have
26:28.43 just had some dead junk on Mars. But what
26:30.35 these guys had was the ability to fix
26:33.47 things while they are running on space
26:34.85 probes. I think that's actually more
26:35.93 valuable. Again, throw the rotten fruit
26:37.90 later. Meet me in the hallway track.
26:40.19 I would say overall that part of
26:42.40 this is because programming is actually
26:44.75 a design discipline. It — oh, we're losing
26:46.19 somebody – somebody's leaving now probably
26:47.51 out of anger about static types.
This was an improvised joke about someone leaving to eat lunch or use the bathroom or something. I've since heard that that person felt embarrassed and called out by the joke, so I'd like to leave an apology here. It was meant to be funny in context!
26:48.95 As a design discipline, you find that you
26:50.93 will figure out what you're building as
26:52.43 you build it. You don't actually
26:54.83 know when you start, even if you think
26:57.04 you do, so it's important that we build
26:58.90 buggy approximations on the way, and I
27:00.89 think it's not the best use of your time
27:02.93 to prove theorems about code that you're
27:04.31 going to throw away anyway. In addition,
27:07.43 the spec is always wrong! It doesn't
27:08.57 matter where you got it, or who said it,
27:12.76 the only complete spec for any
27:14.87 non-trivial system is the source code of
27:16.25 the system itself. We learn through
27:18.28 iteration, and when the spec's right, it's
27:20.02 still wrong! Because the software will
27:21.71 change tomorrow. All software is
27:24.23 continuous change. The spec today is not
27:25.73 the spec tomorrow. Which leads me to
27:27.23 say that overall, debuggability is in my
27:29.26 opinion more important than correctness
27:32.14 by construction. So let's talk about
27:33.76 debugging!
27:35.51 I would say that actually most
27:37.25 programming is debugging. What do we
27:38.63 spend our time doing these
27:41.02 days? Well, we're spending a lot of time
27:43.25 with other people's libraries. We're
27:44.87 dealing with API endpoints. We're dealing
27:46.97 with huge legacy code bases, and we're
27:48.71 spending all our time like this robot
27:50.93 detective, trying to find out what's
27:52.37 actually happening in the code. And we do
27:54.16 that with exploratory programming,
27:54.17 because it reduces the amount of
27:55.66 suffering involved. So, for example, in a
27:57.64 dead coding language, I will have to run
27:58.90 a separate debugger, load in the program,
28:00.35 and run it, set a breakpoint, and get it
28:01.31 here. Now, if I've had a fault in
28:02.75 production, this is not actually so
28:04.49 helpful to me. Maybe I have a core dump,
28:06.35 and the core dump has some information
28:07.73 that I could use, but it doesn't show me
28:08.99 the state of things while it's running.
28:11.21 Now here's some Common Lisp. Look, I set
28:12.83 this variable. Look, I inspect this
28:14.51 variable; at the bottom I see the value
28:16.49 of the variable. This is valuable to me.
28:18.16 I like this, and here we
28:20.33 have a way to look at a whole set of
28:22.49 nested data structures graphically. We
28:24.04 can actually see things – note in
28:25.85 particular the complex double float at
28:27.28 the bottom that shows you a geometric
28:28.37 interpretation.
This object inspector is called Clouseau. You can see a video about it here.
28:30.23 This is amazing! This is also 1980s
28:31.49 technology. You should be ashamed if
28:33.89 you're using a programming language that
28:35.57 doesn't give you this at run time.
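Nearly every managed runtime offers at least a faint version of this. A minimal Python gesture at live inspection (nowhere near Clouseau, just stdlib):

```python
# Ask a live object what it is and what it holds -- no core dump needed.
class Order:
    def __init__(self):
        self.total = 9.5
        self.items = ["tea", "biscuits"]

o = Order()
print(type(o).__name__)   # → Order
print(vars(o))            # → {'total': 9.5, 'items': ['tea', 'biscuits']}
print(dir(o)[-2:])        # the instance attributes sort last
```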
28:37.31 Speaking of programming languages that
28:39.64 do give you this at runtime, here is a
28:43.07 modern version in Clojure. Here's somebody
28:45.95 doing a Datalog query and getting back
28:47.93 some information and graphing it as they
28:49.31 go. I will say that Clojure is slightly
28:51.64 less good at this than Common Lisp, at
28:53.02 present, in part because the Common Lisp
28:53.03 Object System (CLOS) makes it particularly easy
28:54.16 to have good presentations for different
28:56.51 kinds of things, but at least it's in the
28:58.19 right direction.
28:59.14 As we talk about this, one of the
29:00.52 things in these kinds of programming
29:02.33 languages, like Lisp, is that you have an
29:04.31 editor and you're evaluating forms – all the
29:06.16 Clojure programmers here are going to
29:08.14 know this right off –
29:10.19 you're evaluating forms and they're
29:12.04 being added to the runtime as you go. And
29:13.43 this is great. It's a fantastic way to
29:14.69 build up a program, but there's a real
29:16.49 problem with it, which is that if you
29:18.35 delete some of that code, the thing
29:19.90 that you just evaluated earlier is still
29:20.93 in the runtime. So it would be great if
29:23.14 there were a way that we could know what
29:24.71 is current rather than having, say, a text
29:26.81 file that grows gradually out of sync
29:28.49 with the running system. And that's
29:29.69 called Smalltalk, and has been around
29:30.88 since at least the 70s. So this is the
29:32.14 Smalltalk object browser. We're
29:34.85 looking at Dijkstra's algorithm,
29:36.47 specifically we're looking at
29:37.78 backtracking in the shortest path
29:39.11 algorithm, and if I change this I know I
29:41.26 changed it. I know what's happening. If I
29:43.07 delete this method, the method is gone.
29:44.63 It's no longer visible. So there is a
29:45.95 direct correspondence between what I'm
29:48.40 doing and what the system knows and
29:50.57 what I'm seeing in front of me, and
29:52.01 this is very powerful. And here we have
29:53.87 the Glamorous Toolkit. This is
29:56.38 Tudor Gîrba and feenk's thing. They embrace this
29:57.83 philosophy completely. They have built an
29:58.90 enormous suite of visualizations that
29:59.99 allow you to find out things about your
30:01.85 program while it's running. We should all
30:04.85 take inspiration from this. This is an
30:06.35 ancient tradition, and they have kind of
30:07.90 taken this old thing of Smalltalkers
30:10.37 and Lispers building their own tools as
30:12.40 they go to understand their own codebases,
30:13.97 and they have sort of pushed it –
30:15.16 they've pushed the pedal all the way to
30:17.63 the floor, and they're rushing forward
30:19.90 into the future and we should follow
30:21.76 them.
30:23.81 Another thing that is very useful in
30:25.19 these situations is error handling. If
30:26.81 your error handling is 'the program stops',
30:29.02 then it's pretty hard to recover.
30:31.07 But in a Common Lisp program like this –
30:33.04 this is an incredibly stupid toy example –
30:34.85 but I have a version function. I have not
30:36.04 actually evaluated the function yet. I'm
30:37.54 going to try to call it. So, what's going
30:39.28 to happen, well, the CL people here know
30:40.73 what's going to happen, it's going to pop
30:41.81 up the condition handler. So this is
30:43.07 something that – programming in Clojure –
30:43.08 I actually really miss from Common Lisp.
30:43.90 It comes up, and I have options here. I
30:45.88 can type in the value of a specific
30:47.51 function, say 'hey call this one instead'
30:49.13 for the missing function. I can try again,
30:50.99 which – if I don't change anything – will
30:52.90 just give me the same condition handler.
30:54.64 Or, I can change the state of the running
30:57.04 image and then try again. So, for example,
30:58.54 if I go down and evaluate the function
31:00.16 so that it's now defined and hit retry,
31:01.90 it just works. This is pretty amazing. We
31:02.81 should all expect this from our
31:05.09 programming environments. Again, when I
31:06.23 talk about Smalltalk and Lisp, people say
31:07.61 'well, I don't want to use Smalltalk or Lisp'. I'm
31:09.64 not telling you to use Smalltalk or
31:11.45 Lisp. I'm telling you that you should have
31:13.01 programming languages that are at least
31:14.99 as good as Smalltalk and Lisp.
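To make the interaction concrete for non-Lispers, here is a crude Python imitation (Python unwinds the stack on error, so this only simulates the supply-a-definition-and-retry restart; all names are invented for illustration):

```python
# Catch the "undefined function" condition, supply a definition, retry.
funcs = {}

def call_with_restart(name, *args, attempts=2):
    for _ in range(attempts):
        try:
            return funcs[name](*args)
        except KeyError:
            # In the Lisp debugger a human pauses here and defines the
            # missing function; we simulate that repair, then retry.
            funcs[name] = lambda x: x * 2
    raise RuntimeError(f"{name} still undefined")

print(call_with_restart("double", 21))  # → 42
```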
31:16.37 Some people, when I show them all this
31:17.81 stuff – all this interactive stuff, they're,
31:19.61 like, 'Well, what if I just had a real fast
31:21.71 compiler, man? You know I can just
31:23.45 just change and hit a key and then the
31:25.19 things that –' Well, we're back to that
31:28.01 💩 again, because if you have a fast
31:29.81 compiler you still have all the problems
31:31.97 with the blank slate/run-to-termination
31:33.64 style. Data science workloads
31:35.09 take a long time to initialize. You might
31:36.23 have a big data load and you don't want
31:37.43 to have to do that every single time you
31:38.81 make a change to your code. And the data
31:41.21 science people know this! This is why R
31:42.76 is interactive. This is why we have
31:43.97 notebooks for Python and other languages,
31:45.71 because they know it's crazy to work
31:48.23 this other way. Also, GUI State – oh my word!
31:49.54 It can be incredibly tedious to click
31:50.87 your way back down to some sub-sub-menu
31:53.38 so that you can get to the part where
31:55.13 the problem is. You want to just keep it
31:56.57 right where it is and go in and see
31:58.78 what's happening behind the scenes, and
32:00.35 fix it while it's running.
Someone came up to me after the talk and described a situation where he was working on a big, fancy commercial video game. He had to play the same section of the game for 30 minutes to get back to where the error occurred each time. 😱
32:01.78 Also, you should be able to attach to
32:03.35 long-running servers and debug them
32:05.45 while they're in production. This is
32:07.13 actually good! It's scary to people who
32:08.75 are easily frightened, but it is very
32:10.78 powerful.
32:13.19 I'll say after all of this about
32:16.37 interactive programming, about escaping
32:18.40 batch mode, that almost all programming
32:20.21 today is still batch mode. And how do we
32:21.52 feel about that? I kind of feel like Licklider
32:23.09 did. Licklider funded
32:24.88 almost all of the work that created the
32:26.26 world we live in today, and Engelbart
32:27.95 built half of it, and one of the things
32:29.69 that Licklider said that I found – I
32:31.07 just love the phrase – is 'getting into
32:33.23 position to think'. That is, all of the
32:34.97 ceremony that you have to go through to
32:36.59 get ready to do your work should go away,
32:36.60 and that was their whole mission in the
32:38.99 60s.
32:40.73 We almost got there, but then we have
32:42.40 languages like C++.
32:43.78 I could say a lot of mean things
32:46.31 about C++, but I used to work at the
32:47.57 same facility that Bjarne did, and I kind
32:49.66 of know him a little bit, so I'm not
32:51.23 going to do that. Instead,
32:52.97 I'm just going to quote Ken Thompson.
32:56.99 This is a really funny situation,
32:59.21 because I worked [using] some of the early C++
33:01.31 compilers because I was
33:03.28 excited about the idea of having decent
33:05.69 abstractions in a low-level language
33:08.14 that I could use [at work]. But I will say that it
33:08.15 was never great, and that it has gotten
33:09.40 worse over time, paradoxically by adding
33:11.51 good features to the language. But
33:14.26 if you keep adding every feature that
33:16.13 you possibly want, you end up
33:17.69 with a language that is not in any way
33:19.66 principled. There is no way to reason
33:20.99 about it. It has too much junk in it. And
33:22.97 if you'd like to see this happening in
33:26.45 real time to another language, I
33:31.43 recommend that you read what's going on
33:33.40 in TC39 with JavaScript, where they are
33:34.85 adding every possible feature and
33:36.52 muddying an already difficult language
33:38.81 further.
In all fairness, TC39 is in a terrible position. They can't remove features from the language because there's such a large corpus already in the world. At the same time, the language has a bunch of ergonomic problems that they want to fix. I wish they had frozen a primitive version of JS and added a marker at the beginning of scripts to switch out which language is used, much in the way #lang does in Racket.
33:40.07 So, what about Go? Well, I admire the
33:42.76 runtime and the goroutines, the garbage
33:44.81 collector, but it's really another punch
33:47.21 card compatible compile/run language. It
33:49.25 also shares with C++
33:50.87 the problem that it's not a great
33:52.19 library language, because if you want to
33:53.45 write a library in Go and then use it
33:54.88 from say a C program, or whatever, you
33:57.28 have to bring in the entire go runtime,
33:58.54 which is a couple [of megabytes] – mostly
34:00.04 not what I want. So what about Rust? Well, I mean it's
34:01.61 a nice thing that Rust is a good library
34:03.59 language. I like that about it. But it's
34:04.90 also a huge missed opportunity in terms
34:06.35 of interactive programming. They just
34:06.36 went straight for the punch cards again.
34:07.61 And it's a super super complicated
34:10.01 language, so it would be nice when
34:11.51 trying to figure out which of the 40
34:12.95 different memory allocation keywords
34:15.29 you're going to use to tell it how to do
34:16.60 its thing if you could explore that
34:17.99 interactively instead of going through a
34:19.55 compile/test cycle. And another way that
34:21.40 I feel about it – I have to quote Deech
34:23.38 here – which is that you know some people
34:25.07 hate stop the world GC, I really hate
34:27.23 stop the world type checkers. If
34:29.57 it's going to take me an hour to compile
34:30.95 my thing, I just want to give up. I'm
34:32.81 going to become a carpenter or something.
34:35.03 In this family of languages,
34:36.34 I'll say that Zig is more to my taste. I
34:37.60 actually like Zig more than I like
34:40.12 Rust. This will anger all of the
34:41.99 Rustaceans. I apologize, but it is true.
34:43.31 But, Zig people – for goodness sake – why is
34:45.10 there no interactive story there either?
34:46.66 You've got this nice little language
34:47.93 that has multi-stage compilation. It can
34:49.60 learn a lot from Lisp, and it just sort
34:52.43 of ignores all that and goes straight
34:53.57 to the 1970s or before.
34:55.43 So what do future directions that don't
34:57.34 suck look like? Well, I'll give you some
34:58.73 examples that try to use some
34:59.75 of the things I've talked about as
35:01.55 underexplored areas. So, this
35:02.75 is a structure editor for Racket, which
35:04.84 is a dialect of Scheme, and it was built
35:07.01 by a fellow called Andrew Blinn, and it's
35:08.51 still Racket underneath. That is, it's
35:10.01 still a lot of parentheses – it's still
35:11.99 S-expressions – but when you're editing it,
35:14.56 you have this completely different
35:15.89 feeling where you're modifying this
35:17.39 living structure and it's quite colorful
35:19.13 and beautiful – probably for some of you
35:20.56 garish – but I like it.
35:22.13 And I recommend having a peek at how
35:25.67 that works, and compare it to how you're
35:27.53 editing code now. Another example that I
35:29.27 think will be more accessible to this
35:32.21 audience is this one from Leif Anderson.
35:33.17 This is also Racket, and this is doing a
35:35.27 define using pattern matching for a red
35:37.25 black tree balancing algorithm. And it is
35:39.47 an ancient practice of many years to
35:42.29 document gnarly code like this with a
35:43.67 comment block over it, but you have a
35:46.13 couple of problems: (1) the comment block
35:47.51 is ugly and doesn't obviously
35:49.01 mean what it's supposed to mean; but
35:50.45 also (2) it can grow out of sync with the
35:52.43 code itself. So Leif has made this fine
35:53.81 thing that reads the code and produces
35:55.37 these diagrams, and you can switch the
35:57.82 diagram view on or off. So this is
35:59.45 what – if we want to talk about
36:01.43 self-documenting code, I would say
36:02.56 something like this that can actually
36:04.55 show you what the code does is better
36:07.49 than what most things do.
36:09.10 In the same vein, we've got this piece.
36:11.27 This is called Data Rabbit. Data
36:13.84 Rabbit is a crazy data visualization
36:15.71 thing written in Clojure. Each one of
36:17.08 these little blocks that are connected
36:18.29 by these tubes is actually a little
36:20.51 piece of Clojure code, and they can do
36:22.67 data visualization, they can do
36:24.53 refinement, they can do all of these nice
36:26.21 things. I'm not a huge, you know, box
36:28.19 and arrow programming language guy, but I
36:29.63 think that Ryan has done great work here
36:32.27 and that everybody should take a look at
36:33.13 it.
36:34.91 There's also Clerk. I'm a bit biased
36:36.05 here. This is something I work on. This is
36:38.03 something I've been working on for the
36:39.95 last year with the team at Nextjournal,
36:42.17 but I think it is actually very good, so
36:44.63 I'm going to tell you a little something
36:46.67 about it.
36:48.17 This is this is what it looks like
36:49.43 when you're working with Clerk. You've got
36:51.34 whatever editor you want on one side and
36:52.97 then you've got a view onto the contents
36:54.17 of the namespace you're working on off
36:55.49 to the side. This has some special
36:57.53 properties. It means, for one thing, that
36:59.08 you can put these notebooks into version
37:01.19 control. You can ship these notebooks.
37:02.51 These can be libraries that you use. You
37:02.52 don't have this separation between your
37:03.47 notebook code and your production code.
37:05.45 They can be the same thing, and it
37:06.82 encourages a kind of literate
37:07.84 programming approach where every comment
37:09.17 along the way – or every comment block
37:11.03 along the way – is interpreted as markdown,
37:11.93 with LaTeX and other features.
37:13.01 It's a very nice way to work. I
37:14.99 encourage the Clojure people here to
37:16.06 check it out. It is of no use to you if
37:18.10 you're not a Clojure person, because it's
37:20.15 very Clojure-specific. And I'll show you
37:21.89 a couple of other screenshots here, like
37:23.56 this one, where we're doing some data science, and
37:26.08 you've got – that's my emacs on the
37:27.89 right hand side, and I'm able to do all
37:30.23 of the things, like pretty printing data
37:31.97 structures, and inspecting them, and then
37:34.19 sending things over and seeing them in
37:35.63 Clerk. It is a very cozy way to work.
37:37.67 There's also, for instance, this example
37:38.81 where in around six lines of code I do a
37:40.37 query for some bioinformatic information
37:42.29 that shows me
37:43.91 what drugs affect what genes that are
37:45.34 known to be correlated with what
37:47.39 diseases, so we can see what drugs
37:48.77 might be interesting targets for genetic
37:50.08 disorders of differing type. Twenty
37:51.29 years ago, if you had told people
37:53.56 they'd be able to do a single query like
37:55.73 this and find these kinds of things out,
37:57.10 they would have looked at you like you had
37:58.79 two heads, but here it is and it's no
38:00.34 code at all. Or this, which is a port of
38:02.15 Sussman's Structure and Interpretation
38:03.71 of Classical Mechanics library into
38:05.27 Clojure that you can use inside of Clerk.
This is very nice work by Sam Ritchie. In addition to porting the libraries, he's working on an open edition of Sussman's textbooks using Clojure.
38:07.73 And then [you can] do things with physics –
38:09.95 real things. This is emulating a chaotic
38:11.87 system, and you can actually – you can't
38:13.43 see on this – but you can actually grab
38:15.77 sliders and move them around and change
38:17.75 the state of the system in real time.
38:18.82 It'll show you what's happening.
38:20.15 Or this. Martin here in the front row
38:23.56 wrote this. This is an example of Rule
38:25.49 30, which is a cellular automaton, and he's
38:26.56 written a viewer for it, so instead of
38:27.89 looking at 1s and 0s, you can
38:28.91 actually see the thing he's working on.
38:29.87 And the amount of code this takes is
38:31.73 almost none.
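Rule 30 itself is tiny, which is why the viewer code can be "almost none." Martin's viewer is Clojure/Clerk; this is just a rough Python sketch of the automaton's update rule, with a text rendering standing in for the graphical viewer:

```python
def rule30_step(row):
    """Compute the next generation of a Rule 30 cellular automaton.
    Cells outside the row are treated as 0."""
    n = len(row)
    padded = [0] + list(row) + [0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i] ^ (padded[i + 1] | padded[i + 2]) for i in range(n)]

def rule30(width, steps):
    """Run Rule 30 from a single live center cell; return every generation."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = rule30_step(row)
        rows.append(row)
    return rows

# Render a few generations as text, instead of staring at 1s and 0s.
for r in rule30(11, 4):
    print("".join("#" if c else "." for c in r))
```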
38:34.55 This is a regular expression dictionary
38:36.10 that I wrote. This thing – one of the
38:38.87 nice things about Clerk is you have all
38:40.67 the groovy visualization [and] interactive
38:42.17 things that come from having a browser,
38:43.91 but you also have all the power of
38:45.41 Clojure running on the JVM on the other
38:46.60 side. So you can do things like talk to a
38:47.93 database or the file system, which is a
38:49.19 revelation compared to what you can
38:51.34 normally do with a browser.
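The regular-expression dictionary is easy to sketch. The real one is Clojure/Clerk; this Python version uses a hard-coded word list as a stand-in for a system dictionary file, which is exactly the kind of server-side resource a plain browser page couldn't read:

```python
import re

# Stand-in word list; the real thing would read something like a system
# dictionary file on the JVM side.
WORDS = ["cat", "cart", "carton", "chart", "cohort", "court", "molt", "mort"]

def regex_dictionary(pattern, words=WORDS):
    """Return every word the pattern matches in full."""
    rx = re.compile(pattern)
    return [w for w in words if rx.fullmatch(w)]

print(regex_dictionary(r"c.*rt"))  # → ['cart', 'chart', 'cohort', 'court']
```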
38:53.15 With this kind of thing you
38:55.13 can do rapid application development. You
38:57.17 can do all kinds of things, and I will
38:58.60 add that Clerk actually improves on the
38:59.69 execution semantics that you normally
39:00.95 get with Emacs and Clojure. This is inside
39:02.32 baseball for the Clojure people, sorry
39:04.49 for everybody else, but that thing I was
39:06.23 talking about – about how you can add
39:08.08 things to the running image and then
39:09.29 delete the code, and then they're not
39:11.15 there and you don't know it, and maybe
39:12.65 you save your program and it doesn't work
39:14.03 the next time you start it – Clerk will not
39:15.34 use things that you've removed from the
39:16.84 file. It actually reports that, so you get
39:18.65 errors when you have gotten your text
39:19.97 out of sync with your running image.
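The idea of flagging text that has drifted out of sync with the running image isn't Clojure-specific. This Python sketch is my own illustration of the check, not Clerk's actual mechanism: parse the source file for definitions and report any name that survives only in the live namespace:

```python
import ast

def stale_definitions(source, live_namespace):
    """Names present in the live namespace but no longer defined in the
    source text -- i.e. definitions that survive only in the running image."""
    tree = ast.parse(source)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, (ast.FunctionDef, ast.ClassDef))}
    return sorted(name for name in live_namespace
                  if name not in defined and not name.startswith("_"))

# The file once defined both helpers; `old_helper` was deleted from the
# text but still exists in the running image.
source = "def keep(x):\n    return x\n"
live = {"keep": lambda x: x, "old_helper": lambda: None}
print(stale_definitions(source, live))  # → ['old_helper']
```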
39:21.53 Now, obviously, I have a huge Lisp bias. I
39:23.51 happen to love Lisp, but it's not just
39:25.49 Lisps. There are other people doing good
39:27.89 things. This is called Hazel. This is
39:29.99 from Cyrus Omar's team. You see those
39:31.25 little question marks after the function
39:32.87 there? This is an OCaml or Elm-like
39:34.01 language, and they do something called
39:35.81 typed holes, where they're actually
39:37.25 running their type inference
39:38.75 interactively and using it for its, in my
39:40.06 opinion, strongest purpose, which is
39:41.27 improving the user interface. So here, when
39:42.34 you go to put something into one of
39:44.75 these typed holes, it knows what type
39:46.67 it's going to be, and it's going to give
39:48.71 you hints, and it's going to help you do
39:50.39 it, and they've taken that to build this
39:52.25 nice student interface. If you're
39:54.71 going to teach students through design
39:55.73 recipes that involve type-based thinking,
39:56.87 then you should have a thing like this
39:58.31 that actually helps them in some way, and
40:00.41 the one they've made is very good. I
40:01.79 recommend reading the papers. [Cyrus] has
40:03.34 a student called David Moon who has made
40:04.84 this. This is called Tylr. I can't really
40:06.41 show you this in a good way without
40:07.97 [many videos]. So I recommend that you go to
40:10.19 David Moon's Twitter, and you scroll
40:11.27 through and you look at some of these
40:13.49 things. It's got a beautiful
40:15.23 structure editing component that
40:16.43 prevents you from screwing up your code
40:17.51 syntactically while you're working on it,
40:18.71 and gives you advice based on type
40:20.39 information.
40:22.13 Here this is my absolute favorite
40:23.56 from Cyrus's group. This is also by
40:25.37 David Moon who did the structure editor
40:26.81 and Andrew Blinn who did the nice editor
40:28.73 for Scheme that we saw at the beginning
40:30.17 of this section. Here we have, again, an
40:32.27 OCaml or Elm-like language, but you can
40:34.13 put these little widgets in.
40:36.65 These are called livelits, with the
40:37.79 syntactical affordance here [that] they
40:39.71 begin with a dollar sign.
40:41.51 He's got some data here, and the data
40:42.58 is shown as a data frame. It's
40:43.97 actually a convenient, nice-to-edit thing,
40:45.41 and it's in-line with the source code.
40:47.56 This is a thing where you can have
40:49.13 more expressive source code by allowing
40:50.99 you to overlay different views onto the
40:51.00 source code. You can also see there's a
40:52.49 slider in there, and the slider is [live]:
40:54.23 the rest of the
40:56.08 values are immediately recomputed when
40:57.41 the slider moves, in a dataflow kind of
40:59.03 way. This is a great project. I hope they
41:00.23 do more of it. Here's something a little
41:02.75 crazier. This is Enso. Enso is groovy
41:05.03 because it is a functional programming
41:07.06 language that has two representations. It
41:09.82 is projectional, so it is not just this
41:11.08 kind of lines between boxes thing.
41:13.55 It's lines between boxes, and then you
41:15.10 can flip it over and see the code that
41:18.23 corresponds to those things. You can edit
41:19.79 either side and it fixes both.
41:21.34 And now we'll go on to our last example
41:22.97 from this section, which is also the
41:24.65 craziest one. And that is Hest by Ivan Reese.
41:27.10 Here we're computing factorial,
41:28.37 but we're doing it with animation, so we
41:29.69 see these values flowing through the
41:31.60 system in this way and splitting based
41:33.10 on criteria that are
41:34.60 specified in the code, and we're working
41:36.53 up to a higher and higher factorial now.
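Hest's animation can't be reproduced in text, but the shape of the computation can: a (counter, accumulator) pair flows around a loop and, at a split, either loops back or exits. This Python sketch is my own rough analogue of that dataflow, not Ivan Reese's code; the trace it records is what Hest animates:

```python
def factorial_flow(n):
    """Trace factorial as a value flowing through a loop with a split:
    at each step the (counter, accumulator) pair either loops back
    or exits, depending on the counter."""
    state = (n, 1)
    trace = [state]
    while True:
        counter, acc = state
        if counter <= 1:                          # the split: exit branch
            return acc, trace
        state = (counter - 1, acc * counter)      # loop-back branch
        trace.append(state)

result, trace = factorial_flow(5)
print(result)  # → 120
print(trace)   # → [(5, 1), (4, 5), (3, 20), (2, 60), (1, 120)]
```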
41:38.45 I look at this, and I don't say 'yeah,
41:41.45 that's how I want to program; I
41:42.89 want to spend every day in this thing',
41:44.81 but what I've learned – if nothing else –
41:47.15 over the very long career that I've had,
41:49.67 is if you see something that looks
41:50.99 completely insane and a little bit like
41:52.31 outsider art, you're probably looking at
41:53.87 something that has good ideas. So, whether
41:55.60 or not we ever want to work like this, we
41:58.01 shouldn't ignore it.
41:59.21 This was my last example for today.
I had to stop because I was already slightly over time, but there are a number of other systems that I would like to have mentioned:
In this talk, I stayed away from artistic livecoding systems because many programmers can't see themselves in what artists are doing. However, I would be remiss not to show you these systems:
42:01.91 I have some thank yous to do. First,
42:04.31 I'd like to thank Alex for inviting me
42:07.25 to give this talk. I'd like to thank Nextjournal
42:08.69 for sponsoring my work, including
42:10.91 the writing of this talk. And I would
42:13.25 like to thank all of you for watching!
42:15.41 Thank you very much!