Just some thoughts... for a toy project I might implement in the coming years, maybe.
This post might not make much sense, at least not today, but I am writing it down anyway, for the record. Let's see whether 10 years from now this will be the main driving force behind every AI on the planet, or whether I was wrong. I am not sure, but something (a lot of things, actually) tells me this idea might not be completely crazy.
I think Sodium (or, in the cloud, Monix) might lead to a revolutionary AI breakthrough.
Keywords for why:
- compositional
- dynamic network
- reusability
- local learning
- unsupervised measure of goodness
- hierarchy
- the learning algorithm is an artificial brain itself.
Ok. These are keywords. I don't wanna get into the details.
But just so you know that I am only half crazy, here is some proof.
I really don't wanna write 5 pages of text on all the reasons why this makes sense.
The main idea is that:
* Current deep neural networks lack expressive power. A network implements an algorithm encoded in its weight parameters, "some algorithm" (translation, rotation, scaling, whatnot), but these networks will never implement a Haskell compiler. The computational language deep networks are allowed to use is nothing more than a sequence of numbers plus one simple operation defined in a neuron (a nonlinear function). That is a very stupid, low-level language, so the idea is to use a more expressive language instead of the current wasteful one. Imagine you have to write the same rotation algorithm 1000 times because you cannot parametrize it by the angle you want to rotate by. Neural networks are restricted to exactly this: they have to optimize 1000 sets of parameters just to implement 1000 copies of the same rotation algorithm, each rotating by a different angle, say in 5 dimensions (a 5x5 matrix, 25 numbers, even though the family of rotations being learned has only a degree of freedom or two, essentially the angle). Convolutional layers are used today for exactly this reason: they share parameters to implement translation invariance, but where those convolutional layers sit is fixed. Imagine writing an algorithm without being allowed to use functions or variables, allowed only to multiply and apply a fixed nonlinear function; you cannot create your own function and parametrize it, you cannot create loops, you cannot have variables you can update. This is the programming language in which deep networks write their machine vision algorithms. (A toy sketch of the parametrization point follows after this list.)
* It is starting to become clear why neural networks work, but that insight has nothing to do with deep networks specifically. Current deep networks work by accidentally doing the right thing, and this right thing can be applied to computational models far more powerful than deep networks. Up until now, nobody knew what this right thing was, because nobody knew how on earth deep networks could actually work at all.
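To make the parametrization point concrete, here is a toy sketch in plain Haskell (all names are made up by me, nothing here is Sodium yet): one reusable rotation function parametrized by its angle, versus the family of per-angle weight matrices a network without abstraction effectively has to store.

```haskell
-- Toy illustration (hypothetical names): what "parametrize by angle" buys you.

-- One reusable function, one degree of freedom: the angle.
rotate :: Double -> (Double, Double) -> (Double, Double)
rotate theta (x, y) = ( cos theta * x - sin theta * y
                      , sin theta * x + cos theta * y )

-- What a network that cannot abstract effectively learns instead:
-- a separate, fully materialized weight matrix for every single angle.
type Matrix = [[Double]]

rotationMatrix :: Double -> Matrix
rotationMatrix theta = [ [cos theta, -sin theta]
                       , [sin theta,  cos theta] ]

-- 1000 rotation "algorithms" = 1000 stored matrices,
-- even though the whole family is described by 1000 angles.
thousandRotations :: [Matrix]
thousandRotations = [ rotationMatrix (2 * pi * fromIntegral k / 1000)
                    | k <- [0 .. 999] ]
```

One function with one parameter versus a thousand memorized matrices; that is the whole expressivity complaint in miniature.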
The most important part, however, is that without Sodium, implementing code that uses all the right principles is a lot of work, and people won't even try to do it, especially the current ML community, who are busy writing assembly code for GPUs to make the current DNN architectures go "faster".
In order for scientists to be able to write code which implements advanced concepts, they will need to use higher-order FRP + FP (monads and their friends). So either they will, sooner or later, implement all these things themselves without even realising what they are doing, "discover FRP", in other words reinvent the wheel, which will take them 10 years, and then another 5 years later they will realise it was already invented back in the 1990s.
Or somebody realises that combining the right technologies that already exist today with the right kind of understanding gained over the last 20 years can lead to an AI explosion.
The reason this is not going to happen soon is that scientists don't bring the right ideas together, and the PhD students who write the AI code do not even know that FRP exists. They write a PhD in 3 years, using C++ to try out yet another type of neural network, but without a paradigm shift in how AI is programmed, those 3 years spent improving AI science will bring only very little, because we all know how long it takes to write a super complex algorithm in C++ versus writing it in Sodium, when the algorithm is all about data transformation and optimising data-transforming networks, like our brain. So, IMHO, sooner or later Sodium (FRP) will either be rediscovered by the ML community, or they will realise that there is no other way forward.
This is my bet. Let's come back 5 years from now and see if I was right.
Also, as a hobby project, in the coming years, I might write some AI algorithm in Sodium, not in Python or C++ or even plain Haskell, but with heavy use of FP on top of FRP, combined with some clever learning algorithms and goodness measures (which are explained in that video).
Ok, food for thought. I won't write this code myself any time soon, but maybe I will write some prototype in the coming years. I am pretty sure it will take the ML community 10-15 years to come to this realisation on their own.
Unless they read this forum post.
Ok, I wrote this down for the record; maybe in the coming years I will try to make some prototype and reimplement some deep neural network in Sodium in 5 lines of code, reusing existing algorithms for scientific computing. That is the idea: Sodium would be used to make scientific computing compositional, pure and reusable, by combining existing scientific libraries using Sodium, in a dynamical way, where the network itself can be dynamic, and the network itself can decide how it changes itself, in some sort of evolutionary manner (a minimal sketch of what I mean follows below).
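Here is a minimal sketch of a dynamic, self-rewiring network, assuming the Haskell sodium package (FRP.Sodium on Hackage) and its newEvent / newBehavior / snapshot / listen primitives as I remember them; the layers are hypothetical placeholders, not a real learning algorithm:

```haskell
import FRP.Sodium

-- Two hypothetical "layers": pure functions the network can wire in.
layerA, layerB :: Double -> Double
layerA x = max 0 (2 * x - 1)  -- a toy stand-in for a learned transformation
layerB = tanh                 -- an alternative wiring

main :: IO ()
main = do
  (push, rewire, unlisten) <- sync $ do
    (eInput, push)   <- newEvent           -- the network's input stream
    (bLayer, rewire) <- newBehavior layerA -- which layer is currently wired in
    -- The network: apply whatever layer is current to each input.
    let eOutput = snapshot (\x f -> f x) eInput bLayer
    unlisten <- listen eOutput print
    return (push, rewire, unlisten)
  sync $ push 1.0       -- goes through layerA
  sync $ rewire layerB  -- the network changes its own wiring
  sync $ push 1.0       -- same input, now goes through layerB
  unlisten
```

The point is that "which transformation is wired where" is itself a first-class, time-varying value; a goodness measure could drive the rewire step, which is what I mean by the network deciding how it changes itself.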
Bottom line: there is a lot of understanding and a lot of scientific computing implementations out there, but they should be lifted into an FRP monad (a tiny example of such lifting follows below).
Otherwise the cost of implementing and exploring new AI algorithms will stay very high, and that cost is currently the main bottleneck in the speed of AI algorithm development.
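And "lifting" really is this small. A sketch, again assuming FRP.Sodium (where, as far as I know, Behavior is an Applicative), with dot standing in for any routine taken from an existing numeric library:

```haskell
import FRP.Sodium
import Control.Applicative (liftA2)

-- Pretend this comes from an existing scientific computing library.
dot :: [Double] -> [Double] -> Double
dot xs ys = sum (zipWith (*) xs ys)

-- The lifted version: a time-varying weight vector applied to a
-- time-varying input vector yields a time-varying activation.
activation :: Behavior [Double] -> Behavior [Double] -> Behavior Double
activation = liftA2 dot
```

One line of plumbing, and the existing library function now composes with the rest of the reactive network.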
Over and out... I personally might implement something like this in the coming years. Not right now. I wrote this down for the sake of the record... just to have a good laugh when, 10 years from now, there is a Nature paper on how awesome FRP is for AI.
Have a good one, I am going back to my Scala.js + Sodium project.