Sorry. I left my headset plugged in, so the audio on the camera did not work.
However, I don't want to remove this post, because I received an encouraging comment from a viewer.
I take a "devil's advocate" position, and assume that there is no difference between macro processing and language processing - in the sense that both macro processors and language processors are universal Turing machines, and both can be provided with arbitrarily easy-to-use user interfaces.
The term "macro processor" has been used at least since 1965, when Strachey published details of his GPM macro processor in The Computer Journal.
A simple example of its use would be to convert the Cobol statement "ADD A TO B" to the Fortran statement "B = B + A".
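A rewrite like this can be sketched as a single template rule. The function below is a toy illustration I wrote for this post (the name and the regular-expression rule are my own invention, not part of GPM or any real macro processor):

```python
import re

# Toy macro rule: rewrite the COBOL statement "ADD A TO B"
# as the Fortran statement "B = B + A".
def cobol_add_to_fortran(line):
    m = re.fullmatch(r"ADD (\w+) TO (\w+)", line.strip())
    if m is None:
        return line                     # leave non-matching text unchanged
    a, b = m.group(1), m.group(2)
    return f"{b} = {b} + {a}"

print(cobol_add_to_fortran("ADD A TO B"))   # B = B + A
```

A real macro processor is essentially a library of such rules plus a scanning and rescanning discipline.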
Is there any difference between a "macro processor" and a "language interpreter"? Just as any computer language has general computational power, so does any macro language (or at least we can keep adding commands to it until it does).
It is said that the C preprocessor has the power of a universal Turing machine. But this is of little value to people unless it also provides an easy-to-use interface. A device can be a universal Turing machine without providing an attractive user interface - it may simply be too expensive (at today's computing costs) to provide one. Some Turing machines would take the age of the universe to carry out a simple computation - presumably we would get a result like this if we tried to build a user interface using only the C preprocessor.
The Forth language was originally implemented in Fortran. It allows the user to add commands to the language interactively.
The Unix command line encourages people to write little programs, and then use them to build other programs, using facilities such as piping.
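The classic shape of this is a pipeline: each small program does one job, and the shell composes them. A minimal example (the word list is made up for illustration):

```shell
# Three small programs composed with pipes: generate, filter, count.
printf '%s\n' apple banana avocado cherry \
  | grep '^a' \
  | wc -l
# prints 2 (apple and avocado begin with "a")
```

Each stage is a complete program in its own right, which is what makes the composition style so productive.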
What kinds of languages can we invent? How much can we do with simple languages based on English?
Forth and Lisp both look to be a long way from standard English.
Is it possible, at last, to use standard English as a computer programming language?
Given an English sentence, we can break it up into "kernel sentences".
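As a very crude illustration of what such a decomposition might look like (the splitting rule here is my own toy simplification, nothing like a real linguistic analysis), we can split a compound sentence at its coordinating words:

```python
import re

# Toy "kernel sentence" splitter: break a compound sentence at
# "and" / "then" connectives.  A real decomposition would need
# genuine grammatical analysis; this only shows the shape of the idea.
def kernel_sentences(sentence):
    parts = re.split(r",?\s+(?:and|then)\s+", sentence.rstrip("."))
    return [p.strip() + "." for p in parts]

print(kernel_sentences("Open the file and read each line, then print the total."))
# ['Open the file.', 'read each line.', 'print the total.']
```

Each resulting kernel sentence is simple enough that we could hope to map it directly to an operation the system already understands.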
Someone in the 1960s suggested using natural language as a specification, and converting it to an implementation in 5 steps. Presumably we can use natural language for all the intermediate steps too. For the final step, we can write "kernel sentences" which we already know the system will understand. So natural language can be used at every step, as the only language we need - given that we have already worked out how to convert "kernel sentences" into a computer implementation.
Unix achieved much of its fame by allowing the user to run multiple tasks. Multi-tasking was promoted, around the same time, by Brinch Hansen with Concurrent Pascal, and languages such as PL/I, and the Burroughs languages, already provided this feature.
Has anyone written a macro processor which provides a "multi-tasking" feature? We could implement template matching by using multi-tasking, with a "daemon" for each template watching for its pattern to occur, as in a blackboard system.
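The idea can be sketched with ordinary threads. In the toy program below (all names and templates are invented for this post), each daemon thread owns one template, every statement is broadcast to every daemon, and matching daemons post their rewriting to a shared results list. A real multi-tasking macro processor would also need scheduling, rescanning, and termination rules; this only shows the shape of the blackboard idea:

```python
import queue, re, threading

# One template per daemon; the rewrite uses the captured groups.
templates = [
    (r"ADD (\w+) TO (\w+)",        "{1} = {1} + {0}"),
    (r"SUBTRACT (\w+) FROM (\w+)", "{1} = {1} - {0}"),
]
inboxes = [queue.Queue() for _ in templates]
results, lock = [], threading.Lock()

def daemon(pattern, rewrite, inbox):
    while True:
        line = inbox.get()
        if line is None:                    # shutdown signal
            return
        m = re.fullmatch(pattern, line)
        if m:                               # this daemon's template fired
            with lock:
                results.append(rewrite.format(*m.groups()))

threads = [threading.Thread(target=daemon, args=(p, r, q))
           for (p, r), q in zip(templates, inboxes)]
for t in threads:
    t.start()

# Broadcast each statement to every watching daemon, then shut down.
for line in ["ADD A TO B", "SUBTRACT X FROM Y"]:
    for q in inboxes:
        q.put(line)
for q in inboxes:
    q.put(None)
for t in threads:
    t.join()

print(sorted(results))   # ['B = B + A', 'Y = Y - X']
```

Broadcasting to per-daemon queues, rather than sharing one queue, is what makes this a blackboard: every daemon sees every statement and only the interested ones act.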
But is this the full story? Do we also want the macro processor itself to be able to start other tasks? For example, if each task can start further tasks, we can build a machine running an exponentially growing number of tasks.
Last year I heard of Peter Wegner's work in the late 1990s, in which he said there was a new computing paradigm of interaction rather than computation. I did not agree with him - the machinery for interaction has been well known since Dijkstra's "Cooperating Sequential Processes" (1965), and Simula was also invented in the 1960s. However, I now concede that progress has been slow in writing software this way. Certainly Unix is a very good example, as is the Unisys A Series computer. If anything Wegner says encourages people to write software as interacting agents, then I applaud wholeheartedly.