This post is Part 3/3 describing our roadmap to self-hosting AIs.
By 2020, the Sparkz marketplace will host an AI that programs itself, i.e., a self-hosting AI. In this post, I’ll describe the version of self-hosting that is possible. We’ll build on the neural programmer, and the takeaways from it, covered in earlier posts.
Roadmap to a self-hosting AI
Our neural programmer would work equally well over Python. In fact, we should be able to train the same network simultaneously over many languages. Multiple languages should make it easier for the network to learn the crux of an algorithm without getting stuck on syntactic vagaries. We will, however, keep datasets separated by programming paradigm: mixing imperative, functional, and logic languages together would make learning very difficult. Within a single paradigm, though, e.g., all imperative languages, generalization across languages should be feasible.
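To make the paradigm separation concrete, here is a minimal sketch of bucketing a multi-language corpus so that each training run mixes languages only within one paradigm. The language-to-paradigm table and the tiny corpus are illustrative assumptions, not our actual pipeline:

```python
# Illustrative paradigm table; real corpora would cover many more languages.
PARADIGM = {
    "python": "imperative",
    "java": "imperative",
    "c": "imperative",
    "haskell": "functional",
    "ocaml": "functional",
    "prolog": "logic",
}

def batches_by_paradigm(samples):
    """samples: iterable of (language, source_code) pairs.
    Returns {paradigm: [source_code, ...]} so each training run
    draws from a single paradigm only."""
    buckets = {}
    for lang, code in samples:
        buckets.setdefault(PARADIGM[lang], []).append(code)
    return buckets

# Toy corpus: the same one-liner expressed in three languages.
corpus = [
    ("python", "def f(x): return x + 1"),
    ("java", "int f(int x) { return x + 1; }"),
    ("haskell", "f x = x + 1"),
]
print(batches_by_paradigm(corpus))
```

The two imperative samples land in one bucket and the Haskell sample in another, so the network never sees imperative and functional code in the same batch.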
We are already at capabilities of 50-100 lines. Extending to 500-1000 lines should be straightforward. At that point, we will train it over all publicly available TensorFlow, Keras, and PyTorch DNN models. This unlocks a neural programmer that can generate DNN models.
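As a sketch of how such a training set might be assembled, the following crude filter walks a local checkout of public repositories and keeps Python files that mention a DNN framework. The hint strings and the directory-walk approach are assumptions for illustration, not our actual data pipeline:

```python
import os

# Assumed heuristic: a file is a candidate DNN model definition if it
# imports one of the major frameworks.
FRAMEWORK_HINTS = ("import torch", "import tensorflow",
                   "import keras", "from keras")

def find_model_files(root):
    """Walk `root` and return paths of .py files that mention a DNN
    framework, as candidate training data for the neural programmer."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                if any(hint in f.read() for hint in FRAMEWORK_HINTS):
                    hits.append(path)
    return hits
```

A real pipeline would deduplicate files and filter by license, but the shape of the corpus-building step is the same.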
We are then one step away from the neural programmer being self-hosting. Self-hosting in compilers and OSes is a textbook topic going back to LISP 1.5 in 1962. In AI, we have not yet achieved self-hosting.
By 2020, we will have built a self-hosting AI.
Self-hosting for an AI will be analogous to self-hosting in compilers. Aside from generating itself, it will be capable of generating specialized AIs: versions of object classification, face recognition, handwriting recognition, and voice recognition systems. We do not expect the generated AIs to compete with hand-tuned, state-of-the-art versions written by experts. We don’t have to. For low-level systems code, there still exist experts who write assembly for performance, but the majority of programmers let compilers take care of optimization. We would expect a similar scenario for AIs.
Self-hosting does not imply recursive self-improvement, or AGI. Those have undefined specifications not worth debating. I agree with Rodney Brooks’ take on the topic.
“Creative” self-hosting AI takes noise as input
When discussing the inputs to the neural programmer, I asserted that we should, counter-intuitively, aim for the vaguest specifications possible. My preference, in decreasing order, is:
English text >
IO examples >
Declarative formal specs >
Conceivably, we could go all the way to no specifications, i.e.,
Noise as input. If our neural programmer is incapable of generating non-functioning or non-compiling code, we can view it as simply a mapper from the infinite space of noise to the infinite space of functional code.
We would then sample noise and let it generate code. It would be fun to set up communication access with public servers on the internet, or crypto-currency networks, and put it in a reinforcement learning loop.
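The noise-to-code view above can be sketched as rejection sampling: draw noise, decode it to source text, and keep only output that compiles. The decoder below is a toy template standing in for the learned network, and the compile check enforces by rejection what the real system would guarantee by construction:

```python
import random

def generate_from_noise(noise):
    """Stand-in for the neural programmer's decoder: maps a noise
    vector to source text. A toy template here; the real decoder
    would be a trained network."""
    a, b = int(noise[0] * 10), int(noise[1] * 10)
    return f"def f(x):\n    return x * {a} + {b}\n"

def compiles(source):
    """The 'incapable of emitting broken code' property, checked
    here by attempting to compile the generated source."""
    try:
        compile(source, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

def sample_program(rng):
    # Rejection-sample: draw noise until the decoded program compiles.
    while True:
        noise = [rng.random(), rng.random()]
        src = generate_from_noise(noise)
        if compiles(src):
            return src

print(sample_program(random.Random(0)))
```

Different noise yields different (always-compiling) programs, which is exactly the mapper from noise space to code space described above.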
The toy neural programmer and compiler were built within
The output generated has 1.4k lines of distinct code. If we imagine 100x more data, a network 10x bigger to absorb the semantic content, and 10x better theorem proving (5x more compute and 2x more constraints), then output code of 10k-100k lines should be within reach.
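A back-of-envelope check of these factors. The linear-scaling model below is a rough illustration of the post's own arithmetic, not a measurement:

```python
# The post's own factors: theorem proving improves 10x, from
# 5x more compute times 2x more constraints.
current_output_lines = 1_400
proving_factor = 5 * 2
assert proving_factor == 10

# If output size tracked proving ability alone, 10x lands at 14k
# lines, the low end of the projected 10k-100k range; the data and
# model factors are what would push toward the high end.
projected = current_output_lines * proving_factor
print(projected)  # 14000
```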
That would make a self-hosting AI accessible with some improvements to the state of the art. The main technology to push forward is the combination of SGD with SMT solvers.
Self-hosting alone, though, will not make this scaled system creative. First, it needs to be capable of writing 100k-line programs. Next, it should take noise as its exclusive input. Lastly, it needs to be incapable of writing broken programs. Only then might we see some primitive mechanical creativity: uniform noise, or noise with slight structure, should result in some intriguing output programs.
With noise as input, self-hosting AIs might show mechanical creativity.
Sourcing noise is easy, for humans and machines alike. Self-hosting AIs will be non-boring at that stage.
Follow us on sparkz_ai
Come work with us. Email email@example.com