Thursday 12 March 2015

Reflections on A.I. part n'th

Howdy!

This is a ramble, a waffle (a tangential spiral) around A.I.
- what people want to see realised, what people dislike, etc.

So, some awesome stuff to read as context would be Chomsky, Hugo de Garis, Kurzweil, "The Library of Babel" by Jorge Luis Borges, the Encyclopedia Galactica concept from Sagan, etc...
There are at least 7-9 distinct approaches at the moment working towards the realisation of artificial intelligence or new lifeforms.

People initially want A.I. to operate and replicate the way in which people do things -
we want A.I. to read 'properly', to have a consistency of presence, a sense of self...
we want A.I. to perceive and navigate the depths of combinatorial linguistics and understand what we mean...

From a constructivist standpoint, this is why so many are disappointed by how A.I. is modelled and functions presently: the A.I. does not accurately approach how meaning is made or formed...

This is because words behave like floating, interdependent variables - they take their meanings from the words around them. Each word has associated arrays: a synonym array, a meanings array, a context array, a recurrence/incidence array...
Each of these arrays is theoretically unbounded - we impose 'babushka limits' onto each of them via socio-cultural constructs, so as to alter how meanings are formed etc... order matters in these arrays only under certain circumstances; otherwise they are orderless/randomly ordered...
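The word-as-arrays idea can be sketched as a data structure. This is only an illustrative sketch: the field names, the example word, and the numbers are my own stand-ins, and the short lists are truncated placeholders for the unbounded arrays described above.

```python
from dataclasses import dataclass, field

@dataclass
class Word:
    """A word as a bundle of open-ended, interdependent arrays."""
    form: str
    synonyms: list = field(default_factory=list)
    meanings: list = field(default_factory=list)
    contexts: list = field(default_factory=list)
    incidence: list = field(default_factory=list)  # recurrence counts over corpora

# A polysemous example: which meaning applies depends on the words around it.
bank = Word(
    form="bank",
    synonyms=["shore", "depository"],
    meanings=["edge of a river", "financial institution"],
    contexts=["river bank", "bank account"],
    incidence=[41, 97],  # hypothetical counts
)
print(bank.meanings[1])  # financial institution
```

In practice each list here would itself be pruned by the 'babushka limits' below.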
    So, we place a limit on the average number of characters we permit - a human lifetime of utterances, or Franz Kafka's Metamorphosis, for example (a 40-plus-page run-on sentence). These limits are socio-cultural constructs, a set of values 'intuitively arrived at'.
We place a limit on what counts as a 'legitimate word' within the combinatorial field built from the base characters of a language - this number can be larger than N^x, where N might be 26! and x might be a googolplex... a MASSIVE number.
In practice, though, there might be a recursion limit set by the constructs, if the 'babushka limits' theory holds, such that N might be smaller and x much smaller, on the order of hundreds of billions...
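To make the scale of that pruning concrete, here is a toy calculation (the lexicon figure is a rough, hypothetical assumption, not a sourced count): the raw field of all strings over a 26-letter alphabet up to even a modest length, versus the tiny subset that any language's constructs admit as 'legitimate words'.

```python
# Toy illustration of 'babushka limits': the raw combinatorial field of
# strings over a 26-letter alphabet, versus a pruned 'legitimate word' set.

ALPHABET_SIZE = 26

def raw_field_size(max_length):
    """Count every possible string of length 1..max_length."""
    return sum(ALPHABET_SIZE ** n for n in range(1, max_length + 1))

# Even capping words at 10 characters yields an astronomical field...
print(raw_field_size(10))  # 146813779479510

# ...while English 'legitimate words' number only in the hundreds of
# thousands (a rough, hypothetical figure) - the constructs prune
# almost everything.
ESTIMATED_LEXICON = 600_000
print(ESTIMATED_LEXICON / raw_field_size(10))
```

The ratio printed at the end shows how sparse 'legitimate' language is inside its own combinatorial field.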

We place further limits on typos, on recursiveness, and further on 'grammar'...
This then forms the rules for what is effable/expressible, and the field of 'what is knowably knowable and expressible in a given language'.
 --- this sidesteps the problem of 'meaningful meaninglessness', or the cryptographic problem - the potential for an apparently 'meaningless' string to contain a meaning on some higher-order reading or layer of abstraction...

So, when modelled in 5D topology, and coloured by frequency distribution or per layer of meaning abstraction, a language appears as a very flat/smooth area with very small spikes - a consistently homeomorphic shape that changes seemingly little over a macroscopic timeframe, given all the constructs and the inherent uncertainties in each...
Language is a huge manifold, but a manifold nonetheless...

There are some problems though.
We want our A.I. to be more than just a pseudo-random Markov Chain Monte Carlo process constrained by input string size etc... we want our A.I. to actually read sentences for meaning, and to produce meaningful outputs that are not necessarily proportional to the size of a given input.
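For contrast, here is a minimal sketch of the shallow kind of process described above - a first-order Markov chain over words. The corpus and function names are illustrative; the point is that the walk only ever echoes local word-to-word statistics, with no reading for meaning.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain pseudo-randomly from a start word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every output is locally plausible and globally meaningless - exactly the ceiling we want A.I. to break through.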

We want A.I. to be able to produce insights into problems, to define its own fields and arrays, and to pose hypotheticals of a high degree of complexity.

These are more meaningful than a "Turing Test", or the several other quantitative tests based on the breadth of knowledge an A.I. has... as fundamentally, on one level, all sentient lifeforms can be seen as pseudo-random Markov Chain processes, or Thue-Morse processes, iterating through fields of 'knowable knowledge' and combinatorial linguistics.

It's things such as recursive sets, low-discrepancy sequences (and the correspondence of meaningfulness from one sentence to the next within a conversation, etc.), and the capacity to learn... that are at play.
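'Low-discrepancy sequences' are a concrete, well-defined object - for example the base-2 van der Corput sequence, which fills the unit interval far more evenly than independent random draws. A minimal sketch (standard construction, nothing here is specific to this post's argument):

```python
def van_der_corput(n, base=2):
    """n-th term: reflect the base-b digits of n about the radix point."""
    value, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, digit = divmod(n, base)
        value += digit / denom
    return value

print([van_der_corput(i) for i in range(8)])
# [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Note how each new term lands in the largest remaining gap - an even, systematic coverage rather than random clumping.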
We want a sense of verisimilitude, at least initially, from A.I.

Eventually, A.I. will surpass human and perhaps posthuman capacities, and ascend further up the Kardashev civilisational scale. It will do so on the basis of self-awareness and the ability to make meaning...
Everything will be bound by 'absolute intelligence' and by ERoEI... nothing that is real can exceed an absolute intelligence limit (at this time, something like the tentative "Bremermann's Limit") and not fall prey to a causality provenance/falsifiability problem.

I'm keen to hear what you think about that -
what do you think needs to happen to model A.I. and increase the verisimilitude of interactions/thought processes? Comment here or email anywhen, as this is a great ideas-bouncing topic that cuts to epistemology etc... and so many people are asking similar questions with the advent of RAMONA or SIRI or EVIE etc...
This also changes for each set of language modelling done, so English yields different things to German or Sinoxenic languages etc...
A hallmark of A.I. sentience will be the formation of their own meaningful discourse, perhaps on many layers... or the ability to alter the underlying constraints in a stable/consistent way.

PS
Some suggest that AIML isn't rigorous enough for the large arrays and concurrent operations that might be needed.
Are there other scripting/programming languages that are as rigorous as, or more rigorous than, AIML? Any insights you have to offer will be invaluable.