Artificial Intelligence and Philosophy: Nature of Meaning

In continuation of this series on AI and philosophy, here I want to talk about the nature of meaning. As humans we easily interpret language and extract meaning from sentences. Transferring this skill to machines has been a hot topic recently, but most attempts at doing so operate only superficially: some focus only on grammar, some on the sentiments carried in the sentences, and so on. To really make AI understand language we have to look at what “understanding” actually means. Fortunately there is already a long philosophical background to this.

Both Indian and Greek philosophical traditions have a long history of speculation on the nature of language and meaning. In the Indian philosophical tradition there have been two streams of thought. One stream claims that words independently carry meaning (much as in modern semantic theory). The other school holds that words do not carry meaning until they are placed in a sentence; that is, the meaning of a word is driven by the overall context of the sentence and composition in which it appears.

Several of these philosophical theories of meaning could be used directly in AI. For example, one theory states that meanings are purely mental contents provoked by signs; in other words, language is a way to access contents stored in the brain’s memory. This is referred to as the idea theory. Another theory, the pragmatist theory, says that the meaning (or understanding) of a sentence is determined by the consequences of its application; in other words, language is a way to invoke certain functions.
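The contrast between the two theories can be caricatured in code. This is a toy sketch, not a serious NLP model, and all the names and contents in it are invented for illustration: under the idea theory a word is a key into stored content, while under the pragmatist theory a word's meaning is the consequence it invokes.

```python
# Idea theory: language as access to contents stored in memory.
mental_contents = {
    "fire": "hot, bright, dangerous",
    "water": "liquid, drinkable, extinguishes fire",
}

def idea_meaning(word):
    """Meaning = the stored mental content the sign provokes."""
    return mental_contents.get(word, "no stored idea")

# Pragmatist theory: language as invocation of consequences.
def pragmatic_meaning(word):
    """Meaning = the consequence of applying the sign."""
    consequences = {
        "fire": lambda: "move away",
        "water": lambda: "drink or pour on fire",
    }
    action = consequences.get(word, lambda: "no action")
    return action()   # meaning is produced by invoking a function

idea_meaning("fire")       # → "hot, bright, dangerous"
pragmatic_meaning("fire")  # → "move away"
```

The same sign yields a stored description in one theory and an invoked behavior in the other, which is exactly the distinction the two theories draw.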

A thorough study of these philosophical theories of meaning is needed to leverage some of this work in AI, especially in the case of NLP.


FAST vs. Other Architectures


| FAST | REST |
| --- | --- |
| Resources and functions are two different ways to access the functionality of the server | Everything is a resource |
| Functional calls will not have side effects | Any call can have side effects |
| A function can be accessed through “parameters” | Access is through GET and POST to resources; URL parameters are in line with the RESTful philosophy |

Both architectures allow for stateless interaction between client and server, and both allow for interaction over HTTP.

| RPC | FAST |
| --- | --- |
| A session needs to be maintained between client and server | No session needs to be maintained |
| Calls can have side effects / changes in state | Only RESTful calls will have side effects; pure functional calls will not |
| The API needs to be known in advance | The API can be discovered during interactions |
| Strong coupling leads to a less scalable design | Loose coupling can be used to build scalable systems |


Advantages of FAST Architecture

FAST architecture is superior to RESTful architecture in several ways:

  1. Clean design: Segregation of resource management and functional computation allows for modular development and ease in testing.
  2. Efficiency: Parallelization can be done automatically by the server, taking the burden of efficiency off the client.
  3. Security: Both resources and functions can have authentication and rights.

Functional, Augmented State Transfer

Following up on the previous two posts, we propose a new architecture for combining the functional paradigm with RESTful programming. We name it the FAST architecture.


The role of each piece of the FAST server is described below:

  • The REST API provides a mechanism to post, update, delete and get resources.
  • Some of these resources could be generated dynamically, in which case the REST API might interact with the Lambda Machine internally.
  • The Lambda Machine exposes certain functions to the client.
  • The client can request resources or function calls. A function call is routed to the Lambda Machine, while a resource request is routed to the REST API.

The key ingredients of the Lambda Machine are:

  • Calls to the Lambda Machine will not have any side effects.
  • The state inside the Lambda Machine is immutable. Any mutable state is stored in the REST server.
  • The user can call the Lambda Machine directly.
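These ingredients can be sketched in a few lines. This is a minimal illustration, not the actual implementation, and every class, method and value below is a hypothetical example: the machine's internal state is read-only, and the functions it exposes compute results without mutating anything.

```python
from types import MappingProxyType

class LambdaMachine:
    """Toy Lambda Machine: immutable state, pure exposed functions."""

    def __init__(self, constants):
        # MappingProxyType makes the internal state read-only.
        self._state = MappingProxyType(dict(constants))
        self._functions = {}

    def expose(self, name, fn):
        """Register a pure function under a symbolic name."""
        self._functions[name] = fn

    def call(self, name, *args):
        """Apply an exposed function; no side effects on machine state."""
        return self._functions[name](self._state, *args)

machine = LambdaMachine({"tax_rate": 0.25})
machine.expose("with_tax", lambda state, amount: amount * (1 + state["tax_rate"]))
machine.call("with_tax", 100)  # → 125.0, without mutating any state
```

Attempting to write to `machine._state` raises a `TypeError`, which is the point: all mutable state belongs on the REST side.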

This architecture allows the following calls to the FAST API:

  • Regular REST methods on resources: PUT, GET, POST, DELETE
  • Apply a function on a set of parameters and get the results
  • Apply a function on a resource
  • Apply a function on a set of parameters and post it to a resource
  • Apply a function on the results of another function
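The call forms above can be exercised against a toy in-memory FAST server. This is only a sketch under the assumptions of this post; the class and method names are hypothetical stand-ins for what would be HTTP endpoints in a real deployment.

```python
class FastServer:
    """Toy FAST server: mutable resources on the REST side, pure functions on the lambda side."""

    def __init__(self):
        self.resources = {}   # mutable state (REST side)
        self.functions = {}   # pure functions (Lambda Machine side)

    # Regular REST methods on resources (PUT / GET shown).
    def put(self, uri, value):
        self.resources[uri] = value

    def get(self, uri):
        return self.resources[uri]

    # Apply a function on a set of parameters and get the result.
    def apply(self, fname, *args):
        return self.functions[fname](*args)

    # Apply a function on a resource.
    def apply_on_resource(self, fname, uri):
        return self.apply(fname, self.get(uri))

    # Apply a function on parameters and post the result to a resource.
    def apply_and_post(self, fname, uri, *args):
        self.put(uri, self.apply(fname, *args))

server = FastServer()
server.functions["double"] = lambda x: 2 * x
server.put("/numbers/a", 21)
server.apply_on_resource("double", "/numbers/a")    # → 42
server.apply_and_post("double", "/numbers/b", 5)
server.get("/numbers/b")                            # → 10
# Applying a function on the result of another function is plain composition:
server.apply("double", server.apply("double", 3))   # → 12
```

Note that only `put` and `apply_and_post` touch state; every other call form is side-effect free, mirroring the segregation the architecture is built around.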

Functional Augmentation to RESTful paradigm

Continuing from the previous posts: the RESTful paradigm treats everything as a resource (both data and functions). Methods applied on a REST server may modify its state, so subsequent methods can yield different results. This is not compatible with the functional style. If resources are instead classified into data and functions, one can implement mutable data alongside functional calls that have no side effects.

Parameters can be passed as part of the request or as data objects with URIs. This integrates functional programming with the RESTful architecture to some extent. There are two key innovations here:

  1. Segregation of resources into data and functions. This segregation helps in identifying which calls will have side effects and which will not.
  2. Parameter passing through HTTP. This simplifies calls where a computation needs to be done on some variables.
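Parameter passing through HTTP can be sketched with the standard library's URL parsing. The URL scheme here (function resources under a `/fn/` prefix) is a hypothetical convention for this sketch, not something the posts prescribe; the point is that a client can request a pure computation with a single GET whose query string carries the parameters.

```python
from urllib.parse import urlparse, parse_qs

def handle(url, functions):
    """Route a request: /fn/<name> is a function resource, parameters come from the query string."""
    parts = urlparse(url)
    # parse_qs maps each parameter name to a list of values; take the first.
    params = {k: float(v[0]) for k, v in parse_qs(parts.query).items()}
    if parts.path.startswith("/fn/"):
        name = parts.path[len("/fn/"):]
        return functions[name](**params)   # pure computation, no side effects
    raise ValueError("data resources would be handled by the REST side")

functions = {"add": lambda x, y: x + y}
handle("http://example.com/fn/add?x=2&y=3", functions)  # → 5.0
```

The path prefix is what realizes the first innovation (segregating data from functions), and the query string realizes the second (passing parameters through HTTP).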

Functional resources might remind one of RPC, but they are fundamentally different: we need not maintain a session between the client and the server, and all the interaction can happen over HTTP. This architecture is ideal where data and computation are equally important.

Integrating Functional and RESTful programming

This is the first of a series of blog articles on combining functional and RESTful paradigms.

RESTful programs are by definition resource oriented. A resource is an abstraction of a computational object. RESTful resources can represent a physical entity, an informational object, or even an abstract entity. The resource-oriented paradigm draws inspiration from the Web and hence carries with it a bias toward documents. For example, it is easier to make a call like “GET /mylocation” in a RESTful setup than to ask to forecast the weather at latitude 74.34564, longitude 34.0900 on 27th December 2016. There are some RESTful ways of executing the latter query, but all of them are workarounds and do not adhere to the RESTful spirit.

As an improvement to RESTful services, I think network computation should be split into two parts:

  1. Pure REST functionality where data is handled through RESTful services
  2. Functional APIs where computation on some parameters is done using functional programming paradigms.

This architecture allows for the segregation of data mutations through REST methods and immutable operations through function calls.

When to use functional, OO and iterative paradigms

Each of the programming paradigms has its own use, although I’m not a big fan of object-oriented programming. Functional programming is highly useful when you want to achieve a high level of abstraction; this helps in segregating implementation from specification. But the problem with functional programming is inefficiency, which primarily comes from the lack of control over the implementation. This can be surmounted over time as smarter interpreters are built. Iterative programming, on the other hand, gives that control to the programmer right away. It is also sometimes easier to build iterative programs, since programmers have long been used to coding in this paradigm.

In contrast to these two, OO does not offer much advantage. The only new thing it brings to the table is the ability to maintain state as part of objects, but that can be achieved through other means.

So my approach to writing analytics code is to use iterative programming while keeping in mind functional paradigms (abstraction, statelessness, use of first-class functions, etc.). This ensures the code is as clean as a functional program while having the efficiency of an iterative program. Python is an ideal language for achieving this.
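A small example of this style, with hypothetical names: the body is a plain iterative loop (so the programmer controls the implementation), but the interface is functional: the step is a first-class function, and the only state is the accumulator threaded through the loop.

```python
def aggregate(values, step, init):
    """Fold a sequence: iterative implementation, functional specification."""
    acc = init
    for v in values:          # plain loop: no hidden interpreter overhead
        acc = step(acc, v)    # step is a first-class function; acc is the only state
    return acc

data = [3, 1, 4, 1, 5]
aggregate(data, lambda acc, v: acc + v, 0)            # → 14 (sum)
aggregate(data, lambda acc, v: max(acc, v), data[0])  # → 5 (maximum)
```

The same loop serves any aggregation just by swapping the step function, which is the abstraction benefit of the functional paradigm without giving up the loop's efficiency.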

Functional Programming and Big Data

Map and Reduce have become buzzwords for big-data processing, although they are not new concepts to computer scientists. Ever since the invention of functional languages, map and reduce have been the delight of computer scientists. The problem is that big data stopped at incorporating only these two concepts from functional languages, ignoring several other interesting ones like first-class functions, filter and recursion. Some of these can easily be incorporated into big-data processing techniques.

A nice way to deal with this is to build a parallel interpreter for functional languages. That would make building parallel algorithms very straightforward. A classic example is using first-class functions for building symbolic logic, optimization, etc.
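The neglected concepts are already available in Python's standard library. A minimal sketch (with made-up data) of the same pipeline style big-data frameworks use, but adding filter and first-class function composition alongside map and reduce:

```python
from functools import reduce

def compose(f, g):
    """First-class functions make pipeline stages ordinary values."""
    return lambda x: f(g(x))

records = [1, 2, 3, 4, 5, 6]
stage = compose(lambda x: x * x, lambda x: x + 1)   # square after increment
mapped = map(stage, records)                        # lazily mapped: 4, 9, 16, 25, 36, 49
kept = filter(lambda x: x % 2 == 0, mapped)         # filtered: 4, 16, 36
total = reduce(lambda a, b: a + b, kept, 0)         # reduced: 56
```

Because each stage is just a value, pipelines like this can be built, passed around and (in principle) shipped to workers for parallel execution, which is exactly what a parallel interpreter would automate.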

Machine Learning and Lambda Calculus

Most of human learning happens through symbols. We do not remember data in a quantitative fashion. Even when we remember numbers (for example a phone number or the value of Pi) we store them as a series of symbols rather than as float/int values. Our arithmetic calculations are also symbol based. This symbolic representation gives us the power of abstraction. If we want machines to emulate humans, machines should also understand symbols. To some extent this already happens when a variable is given a name and is referred to by that name in subsequent code. But in this case the machine is not learning the symbol; rather, the programmer is in a way hardcoding it. If a machine could truly learn symbols, learn to combine them into complex symbols and form abstractions, it would be closer to humans in learning ability. In a way, digital machines already use 0’s and 1’s as symbols at the very base level and create abstractions around them.

The need to perform operations on symbols led to the development of lambda calculus and the Lisp family of languages. This was the first major step in AI. Although this happened more than 50 years ago, this approach to AI has not been given as much importance since: the computational world got lost in other aspects like data processing and black-box model fitting (including ANNs). There needs to be a revival of symbolic manipulation and lambda calculus for AI to truly progress beyond function fitting.

Machine Learning vs. Machine Intelligence

Is Artificial Intelligence just about beating a person at the game of Go? I guess not: people have been playing that game for a long time while doing several other things at the same time, including learning new things. Beating a person at a game is not a lofty goal to achieve. In any case, machines have long been doing work not possible for humans, and they are becoming more intelligent over time.

So going back to the question of Go: did the machine learn on its own? The clear answer is no. It was just following an algorithm programmed by its creators. Does that make it non-intelligent? Again the answer is no. Just because the machine didn’t learn doesn’t mean it is not intelligent. A lot of intelligence comes from programmers inputting specific algorithms. Those algorithms sometimes update themselves, leading to “learning”. Nevertheless the intelligence comes from the algorithms/programs, whether they learn or not.

A machine able to predict an event is intelligent because of its prediction algorithms. It may not be able to learn to predict new types of events, but it is still useful. In a way, learning is only a part of intelligence. Intelligence is converting knowledge/data into actionable information, whereas machine learning focuses only on classifying or predicting. Machine intelligence applied to business can create a much bigger impact than pure machine learning. To start with, the intelligence has to be fed in by experts.

Cloud is a bigger revolution than mobile

Much of the boom in the tech space is attributed to the rapid increase in connectivity through mobile. That is a tremendous underestimation of what is going on right now in the tech world. Much of the ease of setting up and running businesses comes from the evolution of cloud architecture, not just from a user being able to access an app on a phone.

Although mobile connectivity serves the last mile, a significant amount of processing happens before the service reaches the end user. All of this has become seamless because of the cheap availability of compute, storage and bandwidth on the cloud. Ignoring this fact leads to misjudgments about several business models. First, a business that serves only the end user without fully leveraging the power of the cloud will never be efficient and will soon lose out to the competition. On the other hand, businesses focused on delivering cloud-based services, whether directly to users or to other applications, will definitely be adding real value to the ecosystem.

In other words, for any tech-oriented company: “if you are not on the cloud, you are dead”. Being on the cloud does not mean putting up a server on AWS or Azure. It involves (1) producing services for internal and external consumption, (2) consuming cloud services wherever possible rather than reinventing the wheel, and of course (3) hosting data and compute on the cloud. In other words, you have to be part of the cloud ecosystem.