Computerworld

The A-Z of Programming Languages: Bourne shell, or sh

An in-depth interview with Steve Bourne, creator of the Bourne shell, or sh
Steve Bourne

Computerworld is undertaking a series of investigations into the most widely used programming languages. Previously we have spoken to Alfred V. Aho of AWK fame, S. Tucker Taft on the Ada 1995 and 2005 revisions, Microsoft about its server-side script engine ASP, Chet Ramey about his experiences maintaining Bash, Bjarne Stroustrup of C++ fame and to Charles H. Moore about the design and development of Forth. We've also had a chat with the irreverent Don Woods about the development and uses of INTERCAL, as well as Stephen C. Johnson on YACC, Luca Cardelli on Modula-3, Walter Bright on D, Simon Peyton-Jones on Haskell and, more recently, Larry Wall, creator of the Perl programming language.

On this occasion we speak to Steve Bourne, creator of the Bourne shell, or sh. In the early 1970s Bourne was at the Computer Laboratory in Cambridge, England, working on a compiler for ALGOL68 as part of his PhD work in dynamical astronomy. This work paved the way for him to travel to IBM’s T.J. Watson Research Center in New York in 1973, in part to undertake research into compilers. Through this work, and a series of connections and circumstances, Bourne got to know people at Bell Labs, who then offered him a job in the Unix group in 1975. It was during this time that Bourne developed sh.

What prompted the creation of the Bourne shell?

The original shell wasn’t really a language; it was a recording -- a way of executing a linear sequence of commands from a file, the only control flow primitive being goto a label. These limitations of the original shell that Ken Thompson wrote were significant. You couldn’t, for example, easily use a command script as a filter because the command file itself was the standard input. And in a filter the standard input is what you inherit from your parent process, not the command file.

The original shell was simple but as people started to use Unix for application development and scripting, it was too limited. It didn’t have variables, it didn’t have control flow, and it had very inadequate quoting capabilities.

My own interest, before I went to Bell Labs, was in programming language design and compilers. At Cambridge I had worked on the language ALGOL68 with Mike Guy. A small group of us wrote a compiler for ALGOL68 that we called ALGOL68C. We also made some additions to the language to make it more usable. As an aside, we bootstrapped the compiler so that it was also written in ALGOL68C.

When I arrived at Bell Labs a number of people were looking at ways to add programming capabilities such as variables and control flow primitives to the original shell. One day [mid 1975?] Dennis [Ritchie] and I came out of a meeting where somebody was proposing yet another variation by patching over some of the existing design decisions that were made in the original shell that Ken wrote. And so I looked at Dennis and he looked at me and I said “you know we have to re-do this and re-think some of the original design decisions that were made because you can’t go from here to there without changing some fundamental things”. So that is how I got started on the new shell.

Was there a particular problem that the language aimed to solve?

The primary problem was to design the shell to be a fully programmable scripting language that could also serve as the interface for users typing commands interactively at a terminal.

First of all, it needed to be compatible with the existing usage that people were familiar with. There were two usage modes. One was scripting, and even though it was very limited there were already many scripts people had written. Also, the shell or command interpreter reads and executes the commands you type at the terminal. And so it is constrained to be both a command line interpreter and a scripting language. As the Unix command line interpreter, for example, you wouldn’t want to be typing commands and have all the strings quoted like you would in C, because most things you type are simply uninterpreted strings. You don’t want to type ls directory and have the directory name in string quotes because that would be such a royal pain. Also, spaces are used to separate arguments to commands.

The basic design is driven from there, and that determines how you represent strings in the language, which is as uninterpreted text. Everything that isn’t a string has to have something in front of it so you know it is not a string. For example, there is a $ sign in front of variables. This is in contrast to a typical programming language, where variables are names and strings are in some kind of quote marks. There are also reserved words for built-in constructs like for loops, but this is common to many programming languages.
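
To make that concrete, a minimal sketch in sh (the directory and file names here are illustrative, not from the interview):

    # Bare words are uninterpreted strings; no quotes needed.
    ls /usr/bin

    # Only a leading $ marks a variable; assignment takes no $.
    dir=/usr/bin
    ls $dir

    # Reserved words such as 'for' introduce control flow.
    for f in one two three
    do
        echo $f
    done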

So that is one way of saying what the problem was that the Bourne Shell was designed to solve. I would also say that the shell is the interface to the Unix system environment and so that’s its primary function: to provide a fully functional interface to the Unix system environment so that you could do anything that the Unix command set and the Unix system call set will provide you. This is the primary purpose of the shell.

One of the other things we did, in talking about the problems we were trying to solve, was to add environment variables to the Unix system. When you execute a command script you want to have a context for that script to operate in. In the old days, positional parameters were the primary way of passing information into a command. If you wanted context that was not explicit then the command could resort to reading a file. This is very cumbersome and in practice was only rarely used. We added environment variables to Unix: named variables that you didn’t have to explicitly pass down from the parent to the child process, because they were inherited by the child process. As an example, you could have a search path set up that specifies the list of directories to be used when executing commands. This search path would then be available to all processes spawned by the parent where the search path was set. It made a big difference to the way that shell programming was done, because you could now see and use information that is in the environment and the guy in the middle didn’t have to pass it to you. That was one of the major additions we made to the operating system to support scripting.
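
A short sketch of how that inheritance works in practice (the script name is hypothetical):

    # Set the search path once in the parent shell...
    PATH=/bin:/usr/bin:/usr/local/bin
    export PATH

    # ...and every spawned process inherits it, so a script such
    # as build.sh (hypothetical) finds its commands via PATH
    # without the caller passing it down explicitly.
    sh build.sh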

How did it improve on the Thompson shell?

I did change the shell so that command scripts could be used as filters. In the original shell this was not really feasible because the standard input for the executing script was the script itself. This change caused quite a disruption to the way people were used to working. I added variables, control flow and command substitution. The case statement allowed strings to be easily matched so that commands could decode their arguments and make decisions based on that. The for loop allowed iteration over a set of strings that were either explicit or by default the arguments that the command was given.
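
For illustration, a minimal sketch of both constructs as they still work in sh today:

    # case decodes a command's arguments by string matching.
    case $1 in
        -v) verbose=yes ;;
        -*) echo "unknown option: $1" ;;
        *)  file=$1 ;;
    esac

    # for iterates over explicit strings, or, when the list is
    # omitted, over the arguments the script was given.
    for arg
    do
        echo $arg
    done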

I also added an additional quoting mechanism so that you could do variable substitutions within quotes. It was a significant redesign, with some of the original flavour of the Thompson shell still there. Also I eliminated goto in favour of flow control primitives like if and for. This was also considered a rather radical departure from existing practice.
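
The two quoting forms side by side, as a small sketch:

    name=world
    echo "hello, $name"    # double quotes permit substitution: hello, world
    echo 'hello, $name'    # single quotes suppress it:         hello, $name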

Command substitution was something else I added because it gives you a very general mechanism for string processing; it allows you to get strings back from commands and use them as the text of the script as if you had typed them directly. I think this was a new idea that I, at least, had not seen in scripting languages, except perhaps LISP.
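
A minimal sketch of command substitution; the backquote notation shown is the original Bourne form:

    # The output of a command becomes text in the script,
    # as if you had typed it yourself.
    now=`date`
    echo "started at $now"

    # A substitution can also feed a loop, here over the
    # file names another command reports.
    for f in `ls *.txt`
    do
        wc -l $f
    done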


How long did this process take?

It didn’t take very long, which is surprising. The direct answer to the question is maybe three to six months at the most to make the basic design choices and to get it working. After that I iterated the design and fixed bugs based on user feedback and requests.

I honestly don’t remember exactly but there were a number of design things I added at the time. One thing that I thought was important was to have no limits imposed by the shell on the sizes of strings or the sizes of anything else for that matter. So the memory allocation in the implementation that I wrote was quite sophisticated. It allowed you to have strings that were any length while also maintaining a very efficient string processing capability because in those days you couldn’t use up lots of instructions copying strings around. It was the implementation of the memory management that took the most time. Bugs in that part of any program are usually the hardest to find. This part of the code was worked on after I got the initial design up and running.

The memory management is an interesting part of the story. To avoid having to check at run time for running out of memory during string construction, I used a less well-known property of the sbrk system call. If you get a memory fault you can, in Unix, allocate more memory and then resume the program from where it left off. This was an infrequent event but made a significant difference to the performance of the shell. I was assured at the time by Dennis that this was part of the sbrk interface definition. However, everyone who ported Unix to another computer found this out when trying to port the shell itself.

Also at that time at Bell Labs, there were other scripting languages that had come into existence in different parts of the lab. These were efforts to solve the same set of problems I already described. The most widely used “new” shell was in the Programmer’s Workbench -- John Mashey wrote that. And so there was quite an investment in these shell scripts in other parts of the lab that would require significant cost to convert to the new shell.

The hard part was convincing people who had these scripts to convert them. While the shell I wrote had significant features that made scripting easier, the way I convinced the other groups was with a performance bake off. I spent time improving the performance, so that probably took another, I don’t know, 6 months or a year to convince other groups at the lab to adopt it. Also, some changes were made to the language to make the conversion of these scripts less painful.

How come it fell on you to do this?

The way it worked in the Unix group [at Bell Labs] was that if you were interested in something and nobody else owned the code then you could work on it. At the time Ken Thompson owned the original shell but he was visiting Berkeley for the year and he wasn’t considering working on a new shell so I took it on. As I said I was interested in language design and had some ideas about making a programmable command language.


Have you faced any hard decisions in maintaining the language?

The simple answer to that is I stopped adding things to the language in 1983. The last thing I added to the language was functions, and I don’t know why I didn’t include them in the first place. At an abstract level, a command script is a function, but it also happens to be a file that needs to be kept track of. But the problem with command files is one of performance; otherwise, there’s not a lot of semantic difference between functions and command scripts. The performance issue arises because executing a command script requires a new process to be created via the Unix fork and exec system calls, and that’s expensive in the Unix environment. And so most of the performance issues with scripting come from this cost. Functions provide the same abstraction without requiring a fork and exec for the implementation. So that was the last thing I added to the language.
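
A sketch of the difference, with hypothetical names: the same logic as a separate script file costs a fork and exec per call, while a function runs in the current shell process.

    # backup.sh as a separate script would be invoked as:
    #   sh backup.sh notes.txt        (fork + exec each time)

    # As a function (added to sh in 1983), it runs in-process:
    backup()
    {
        cp "$1" "$1.bak"
    }

    backup notes.txt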

Any one language cannot solve all the problems in the programming world and so it gets to the point where you either keep it simple and reasonably elegant, or you keep adding stuff. If you look at some of the modern desktop applications they have feature creep. They include every bell, knob and whistle you can imagine and finding your way around is impossible. So I decided that the shell had reached its limits within the design constraints that it originally had. I said ‘you know, there’s not a whole lot more I can do and still maintain some consistency and simplicity’. The things that people did to it after that were to make it POSIX compliant, and no doubt other things have been added over time. But as a scripting language I thought it had reached its limit.

Looking back, is there anything you would change in the language's development?

In the language design I would certainly have added functions earlier. I am rather surprised that I didn’t do that as part of the original design. And the other thing I would like to have done is written a compiler for it. I got halfway through writing a shell script compiler but shelved it because nobody was complaining about performance at the time.

I can’t think of things that we would have done particularly differently looking back on it. As one of the first programmable scripting languages it was making a significant impact on productivity.

If the language was written with the intention of being a scripting language, how did it become more popular as an interactive command interpreter?

It was designed to do both from the start. The design space was: you are sitting at the terminal, or these days at the screen, and you’re typing commands to get things done. And it was always intended that that be one of the primary functions of the shell. This is the same set of commands that you’re accessing when you’re in a shell script, because you’re (still) accessing the Unix environment, but just from a script. It’s different from a programming language in that you are accessing essentially the Unix commands and those capabilities, either from the terminal or from the script itself. So it was originally intended to do both. I have no idea which is more popular at this point; I think there are a lot of shell scripts around.

Many other shells have been written including the Bourne Again shell (Bash), Korn Shell (ksh), the C Shell (csh), and variations such as tcsh. What is your opinion on them?

I believe that Bash is an open source clone of the Bourne shell. And it may have some additional things in it, I am not sure. It was driven (I’m sure everybody knows this) from the open source side of the world because the Unix licence tied up the Unix intellectual property (source code) so you had to get the licence in order to use it.

The C shell was done a little after I did the Bourne shell – I talked to Bill Joy about it at the time. He may have been thinking about it at the same time as I was writing sh but anyway it was done in a similar time frame. Bill was interested in some other things that at the time I had less interest in. For example, he wanted to put in the history feature and job control so he went ahead and wrote the C shell. Maybe in retrospect I should have included some things like history and job control in the Unix shell. But at the time I thought they didn’t really belong in there … when you have a window system you end up having some of those functions anyway.

I don't recall exactly when the Korn shell was written. The early 80s I suspect. At the time I had stopped adding “features” to sh and people wanted to continue to add things like better string processing. Also POSIX was being defined and a number of changes were being considered in the standard to the way sh was being used. I think ksh also has some csh facilities such as job control and so on. My own view, as I have said, was that the shell had reached the limits of features that could be included without making it rather baroque and certainly more complex to understand.


Why hasn’t the C shell (and its spawn) dropped off the edge of the planet? Is that actually happening?

I don’t know, is it? There are a lot of scripts that people would write in the C shell. It has a more C-like syntax also. So once people have a collection of scripts then it’s hard to get rid of it. Apart from history and job control I don’t think the language features are that different although they are expressed differently. For example, both languages have loops, conditionals, variables and so on. I imagine some people prefer the C-style syntax, as opposed to the ALGOL68-like syntax of the shell.

There was a reason that I put the ALGOL-like syntax in there. I always found, and this is the language design issue, that I would read a C program, get to a closing brace, and wonder where the matching opening brace for it was. I would go scratching around looking for the beginning of the construct, but you had limited visual clues as to what to look for. In the C language, for example, a closing brace could be the end of an if or switch or a number of other things. And in those days we didn’t have good tools that would allow you to point at the closing brace and ask ‘where’s the matching opening brace?’. You could always adopt an indenting convention, but if you indented incorrectly you could get bugs in programs quite easily because you would have a mismatched or misplaced brace. So that was one reason why I put in matching opening and closing tokens, like an if and a fi -- all of the compound statements were closed and had unique closing tokens.
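
The point is easy to see in a fragment: every compound statement closes with its own unique token, so there is never any doubt what a terminator matches.

    if test -f "$file"
    then
        echo "$file exists"
    fi                  # fi can only close an if

    while read line
    do
        echo "$line"
    done                # done can only close a while or a for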

And it was important for another reason: I wanted the language to have the property that anywhere there was a command you could replace it with any closed-form command, like an if ... fi or a while ... do ... done, and you could make that transformation without having to rewrite the syntax of the thing that you were substituting. They have an easily identifiable start and end, like matching parentheses.
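
For example, a plain command in a pipeline can be swapped for a closed if ... fi without rewriting anything around it (the file names here are hypothetical):

    # A simple command as one stage of a pipeline...
    grep error build.log | wc -l

    # ...replaced by a closed construct, with no other changes:
    if test -f build.log
    then
        grep error build.log
    fi | wc -l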

Compare current UNIX shells (programs that manipulate text) with the new MS Windows PowerShell (classes that manipulate objects). Would UNIX benefit from a PowerShell approach?

The Unix environment itself doesn’t really have objects … if you look at what the shell is interfacing to, which is Unix. If objects are visible to the people writing at the shell level then the shell would need to support them. But I don’t know where that would be the case in Unix; I have not seen them. I imagine in the Microsoft example objects are first-class citizens that are visible to the user, so you want them supported in the scripting language that interfaces to Windows. But that is a rather generic answer to your question; I am not specifically familiar with PowerShell.

Is Bash a worthy successor to Bourne shell? Should some things in Bash have been done differently?

I believe you can write shell scripts that will run either in the Bourne shell or Bash. Bash may have some additional features that aren’t in the Bourne shell. I believe Bash was intended as a strictly compatible open source version of the Bourne shell. Honestly I haven’t looked at it in any detail so I could be wrong. I have used Bash myself because I run a GNU/Linux system at home and it appears to do what I would expect.


Unix Specialist Steve Parker has posted 'Steve's Bourne / Bash scripting tutorial' in which he writes: Shell script programming has a bit of a bad press amongst some Unix systems administrators. This is normally because of one of two things: a) The speed at which an interpreted program will run as compared to a C program, or even an interpreted Perl program; b) Since it is easy to write a simple batch-job type shell script, there are a lot of poor quality shell scripts around. Do you agree?

It would be hard to disagree because he probably knows more about it than I do. The truth of the matter is you can write bad code in any language, or most languages anyway, and so the shell is no exception to that. Just as you can write obfuscated C you can write obfuscated shell. It may be that it is easier to write obfuscated shell than it is to write obfuscated C. I don’t know. But that’s the first point.

The second point is that the shell is a string processing language and the string processing is fairly simple. So there is no fundamental reason why it shouldn’t run fairly efficiently for those tasks. I am not familiar with the performance of Bash and how that is implemented. Perhaps some of the people that he is talking about are running Bash versus the shell but again I don’t have any performance comparisons for them. But that is where I would go and look. I know when I wrote the original implementation of the shell I spent a lot of time making sure that it was efficient. And in particular with respect to the string processing but also just the reading of the command file. In the original implementation that I wrote, the command file was pre-loaded and pre-digested so when you executed it you didn’t have to do any processing except the string substitutions and any of the other semantics that would change values. So that was about as efficient as you could get in an interpretive language without generating code.

I will say it is funny, because Maurice Wilkes asked me this question when I told him what I was doing. He said ‘how can you afford to do that?’ Meaning, how can you afford to write programs when the primitives are commands that you are executing and the cost of executing commands is so high relative to executing a function in a C program, for example. As I have said earlier, the primary performance limitation is that you have to do a Unix fork and exec whenever you execute a command. These are much more expensive than a C function call. And because commands are the abstraction mechanism, that made it inefficient if you are executing many commands that don’t do much.
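
A classic illustration of that cost: in the original sh even arithmetic meant running the external expr command, so a counting loop spends nearly all its time creating processes.

    # expr is an external command, so each of the 1000
    # iterations pays for a fork and exec just to add one.
    i=0
    while test $i -lt 1000
    do
        i=`expr $i + 1`
    done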

Where do you envisage the Bourne shell's future lying?

I don’t know; it’s a hard question. I imagine it will be around as long as Unix is around. It appears to be the most ubiquitous of the Unix shells. What people tell me is if they want one that is going to work on all the Unix systems out there in the world, they write it in the Bourne shell (or Bash). So, that’s one reason. I don’t know if it is true but that is what they tell me. And I don’t see Unix going away any time soon. It seems to have had a revival with the open source movement, in particular the GNU Project and the Linux kernel.


Where do you see shells going in general?

As I have said the shell is an interface to the Unix environment. It provides you with a way of invoking the Unix commands and managing this environment interactively or via scripts. And that is important because if you look at other shells, or more generally scripting languages, they typically provide access to, or control and manipulate, some environment. And they reflect, in the features that are available to the programmer, the characteristics of the environment they interface to. It’s certainly true the Unix shells are like that. They may have some different language choices and some different trade offs but they all provide access to the Unix environment.

So you are going to see languages popping up and shells popping up. Look at some of the developments that are going on with the Web – a number of languages have been developed that allow you to program HTML and program Web pages, such as PHP. And these are specific to that environment. I think you are going to see, as new environments are developed with new capabilities, scripting capabilities developed around them to make it easy to make them work.

How does it feel to have a programming language named after you?

People sometimes will say to me ‘oh, you’re Steve Bourne’ because they are familiar with the shell. It was used by a lot of people. But you do a lot of things in your life and sometimes you get lucky to have something named after you. I don't know who first called it the Bourne shell.

I thought it was you that named it Bourne?

No. We just called it the shell or sh. In the Unix group back in the labs I wrote a couple of other programs as well, like the debugger adb, but we didn’t call that the Bourne adb. And certainly we didn’t call it the Aho awk. And we didn’t call it Feldman make. So I didn’t call it the Bourne shell, someone else did. Perhaps it was to distinguish it from the other shells around at the time.


Where do you see computer programming languages heading in the future, particularly in the next 5 to 20 years?

You know I have tried to predict some of these things and I have not done very well at it. And in this business 20 years is an eternity. I am surprised at the number of new entrants to the field. I thought that we were done with programming language designs back in the late 70s and early 80s. And maybe we were for a while. We had C, C++ and then along comes Java and Python and so on. It seems that the languages that are the most popular have a good set of libraries or methods available for interfacing to different parts of the system. It is also true that these modern languages have learned from earlier languages and are generally better designed as a result.

I was wrong in 1980 when we thought ‘well, we are done with languages; let’s move on to operating systems, object-oriented programming, and then networking’ and whatever else were the other big problems at the time. Then suddenly we get into the Internet and Web environment and all these things appear which are different and improved and more capable and so on. So it is fun to be in a field that continues to evolve at such a rapid pace.

You can go on the Internet now and if you want to write, for example, a program to sort your mail files, there is a Python or Perl library you will find that will decode all the different kinds of mail formats there are on the planet. You can take that set of methods or library of functions and use it without having to write all the basic decoding yourself. So the available software out there is much more capable and extensive these days.

I think we will continue to see specialised languages, such as PHP, which works well with Web pages and HTML. And then look at Ruby on Rails. Who would have thought LISP would come back to life? It is fun to be an observer and learn these new things.

Do you think there are too many programming languages?

Maybe. But the ones that are good will survive and the ones that aren’t will be seen as fads and go away. And who knows at the time which ones are which. They are like tools in a way; they are applicable in different ways. Look at any engineering field and how many tools there are. Some for very specific purposes and some quite general.

The issue is what set of libraries and methods are available to do all the things you want to do? Like the example I gave about mail files. There are dozens of things like that where you want to be able to process certain kinds of data. And so you want libraries to do things. For example, suppose you want a drawing package. And the question is: what do you want to use the drawing package for? If you are going to write programs to do that, do you write them in Perl or Python or what? So it is going to be driven as much by the support these languages have in terms of libraries and sets of methods as it is by the language itself.

If you were teaching up-and-coming programmers, what would you say?

First, I would be somewhat intimidated because they all know more than I do these days! And the environments today are so much more complicated than when I wrote code. Having said that, software engineering hasn’t changed much over the years.

The thing we practised in the Unix group was that if you wrote some code then you were personally accountable for that code working, and if you put that code into public use and it didn’t work then it was your reputation that was at stake. In the Unix lab there were about 20 people who used the system every day, and we installed our software on the PDP-11 that everyone else was using. If it didn’t work you got yelled at rather quickly, so we all tested our programs as much as we could before releasing them to the group. I think this is important these days -- it’s so easy in these large software projects to write code without understanding very well the environment it will be operating in, so it doesn’t work when you release the code in the real world.

So one piece of advice I’d give is to make sure you understand who is using your code and what they will use it for. If you can, go and visit your customers and find out what they are doing with your code. Also be sure to understand the environment that your program will be deployed into. Lastly, take pride in your code so that your peers and customers alike will appreciate your skill.