Uploaded on Sep 16, 2007
Patent on "Long Tail" for automated content authorship.
As the video shows, I am working on reference books, reports and educational titles (not fiction or literature).
The "algorithms" depend on the genre. The most advanced use parametric, non-parametric, and Bayesian econometrics, graph theory, and meta-analysis (mostly coupled with specialized computational linguistics and the editorial rules required within certain genres). Each piece is rather straightforward; it is the combination that allows complexity. In terms of IT or programming languages, there is no rigidity to this -- again, it depends on the genre. If animation is the goal, then code is written to write MEL scripts, which can automate Maya, which can in turn automate rendering, lighting, etc., via macros. This works well, but only for certain aspects of that genre.
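To give a flavor of the "code that writes MEL scripts" idea -- this is only a hypothetical sketch, not code from the actual system -- a generator program can translate a simple scene specification into MEL commands (`polySphere`, `pointLight`, `move`, `render`) that Maya could then execute in batch mode. The object names and the tiny spec format here are invented for illustration:

```python
# Illustrative only: generate a MEL script from a scene specification.
# Maya could run the resulting script in batch mode to build and render
# the scene without human intervention.

def scene_to_mel(objects):
    """Translate (kind, name, position) tuples into MEL commands."""
    lines = []
    for kind, name, (x, y, z) in objects:
        if kind == "sphere":
            lines.append(f'polySphere -name "{name}";')
        elif kind == "light":
            lines.append(f'pointLight -name "{name}";')
        # Position the newly created object.
        lines.append(f'move {x} {y} {z} "{name}";')
    # Render last, once the scene is fully assembled.
    lines.append("render;")
    return "\n".join(lines)

spec = [("sphere", "ball1", (0, 1, 0)), ("light", "key1", (5, 5, 5))]
mel = scene_to_mel(spec)
print(mel)
```

A driver program along these lines can emit thousands of scene variants, which is what makes per-title marginal cost so low once the generator exists.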
For more detailed discussions, here is the patent link:
Some titles are 98 to 100 percent computer-automated (e.g. business titles, crosswords, etc.). For health titles, only the format editing and production side is automated. The text in the health books was written by medical professionals and edited by a professional editor; the computer expedited formatting using some 50-odd routines (the preface, chapter intros, glossaries, indexes, headings, margins, etc.). Highlights point readers to sources generally not known to internet-averse readers or medical practitioners (designed for medical libraries with internet training services).
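The "50-odd routines" can be pictured as a pipeline of small functions applied in sequence to a manuscript. The sketch below is purely illustrative (the routine names, the book structure, and the naive indexing rule are all my assumptions, not the actual system):

```python
# Hypothetical miniature of a formatting pipeline: each routine takes the
# book structure, augments it, and returns it for the next routine.

def add_preface(book):
    """Prepend a preface section to the front matter."""
    book["front_matter"] = ["Preface"] + book.get("front_matter", [])
    return book

def build_index(book):
    """Naive index: map each capitalized term to the chapters it appears in."""
    index = {}
    for i, chapter in enumerate(book["chapters"], start=1):
        for word in chapter.split():
            if word.istitle():
                index.setdefault(word, set()).add(i)
    book["index"] = {term: sorted(ch) for term, ch in sorted(index.items())}
    return book

ROUTINES = [add_preface, build_index]  # the real pipeline would be far longer

book = {"chapters": ["Diabetes care basics", "Managing Diabetes at home"]}
for routine in ROUTINES:
    book = routine(book)
print(book["index"])
```

The point of the pipeline shape is that routines for glossaries, headings, margins, etc. can be added or swapped per genre without touching the others.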
Currently, some 2 percent of the titles rely on government sources for text. None perform a Google search, spider the net, etc. Some 98 percent of the titles are wholly generated via automation programs; the applications create original information or content that cannot be found elsewhere (e.g. maximum likelihood trade estimates, latent demand forecasts via a decision calculus approach, Chinese and English crosswords, etc.) -- offline applications with no interaction with the internet. In total, there are about 17 genres created this way (about 200,000 titles or so since 2000).
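The actual trade and latent demand models are proprietary econometrics, but the "maximum likelihood" idea itself is standard: choose the model parameters that make the observed data most probable. A deliberately toy example, using a Poisson model on made-up shipment counts (none of this data or model choice comes from the real system):

```python
# Toy illustration of maximum likelihood estimation -- not the actual
# trade-estimate model, just the general principle it relies on.

import math

def poisson_mle(counts):
    """For Poisson-distributed counts, the MLE of the rate is the sample mean."""
    return sum(counts) / len(counts)

def log_likelihood(lam, counts):
    """Poisson log-likelihood of the data under rate lam."""
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

observed = [3, 5, 4, 6, 2]  # invented yearly shipment counts of some good
lam_hat = poisson_mle(observed)

# Sanity check: the MLE scores at least as well as a nearby rate value.
assert log_likelihood(lam_hat, observed) >= log_likelihood(lam_hat + 0.5, observed)
print(lam_hat)
```

Once an estimator like this is calibrated per country and product category, generating the tables for thousands of titles is a matter of iterating it over the data, offline.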
It can take several years to set up an application (including all human inputs, licensed sound effects, textures, models, mocap, data, or decision rules that go into any genre-specific application). Platforms (e.g. Maya) pre-exist. The incremental, or marginal, creation time per title is mentioned in the video.
The genres are blind- or peer-reviewed and/or vetted by users (e.g. librarians or end-users) before they are put into print. The games are played by kids to see what they like. For 3D games, a pre-existing rendering engine is like a blank Word document: it is not created from scratch, but licensed (like MS Word).
I am mostly now working on education titles for Asian, African, and Native American languages that do not have educational materials (games, supplements, texts, videos, mobile phone books, etc.) written in or augmented by their languages. See my dictionary at:
to see a very small percent of the linguistic material used. Watch for a major update and linguistic augmentation to the dictionary this summer, when I will also be introducing EVE. She is an "economically viable entity" -- a step beyond a chatbot, using some of the algorithms mentioned above (with a bit of utility theory and optimal control theory thrown in).
There is no "commercial," "public," or "open source" software that can be used by the general public. Some applications are terabytes in size. I am working on a relatively small poetry application for public use, to be released when completed (probably in a year). It will handle several forms of poetry on any topic the user desires, and will let the user request "another" if they do not like the first one written, or say "change that line," etc.
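The "another" and "change that line" interactions described above can be pictured with a tiny template-based generator. This sketch is entirely my own invention (the templates, the topic slot, and the line-replacement function are assumptions, and the real application is surely far richer):

```python
# Hypothetical miniature of the planned poetry application: generate a
# short templated poem on a topic, and regenerate any single line on
# request ("change that line").

import random

LINE_TEMPLATES = [
    "The {topic} waits in silence",
    "A shadow of the {topic} passes",
    "We remember the {topic}",
    "Night falls over the {topic}",
]

def make_poem(topic, n_lines=3, rng=None):
    """Draw n_lines templates at random and fill in the topic."""
    rng = rng or random.Random()
    return [rng.choice(LINE_TEMPLATES).format(topic=topic) for _ in range(n_lines)]

def change_line(poem, i, topic, rng=None):
    """Regenerate only line i, leaving the rest of the poem intact."""
    rng = rng or random.Random()
    poem = list(poem)
    poem[i] = rng.choice(LINE_TEMPLATES).format(topic=topic)
    return poem

poem = make_poem("sea", rng=random.Random(0))
print("\n".join(poem))
revised = change_line(poem, 1, "sea", rng=random.Random(1))
```

A request for "another" is simply a fresh call to `make_poem`; a "change that line" request maps to `change_line` with the line's index.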
I am not actively working on fiction novels as a priority, though the process is in place for romance novels or similar formulaic types of literature. Fun to do, but not very useful.
There are many other areas I am working on, as there are multiple avenues to explore, especially in new media (mobile and fixed), but more so in high-end analytics and knowledge discovery (i.e. generating knowledge that could not be created otherwise) as applied to business, language, and public services (e.g. criminology) -- areas where unmanageable, sparse, fragmented, or very large data sets (offline) can yield new knowledge structures usable by decision makers (e.g. connecting the dots where humans have difficulty doing so, for lack of time or expertise).
Thanks for watching the video.