AI – a measured evaluation

AI is a highly relevant topic in today’s discourse; however, it is heavily sensationalised, with a focus more on hyperbole than on actual capability. Further, the discourse thoroughly lacks any examination of the limits of AI and of how AI, constrained by these limits, will actually impact society. I aim to offer that examination here.

To that end, the most fundamental thing to be aware of is the halting problem and its implications. The halting problem, in simplified terms, asks the following: if we put a piece of code ‘a’ into a computer, can a different piece of code ‘b’ on that computer decide whether code ‘a’ ever stops, without running it? It is provably impossible for any such code ‘b’ to exist that works for every possible code ‘a’. This is true for any computer, including quantum computers.
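The contradiction at the heart of that proof can be sketched in a few lines of Python. Assume some decider function exists; then we can build a program that does the exact opposite of whatever the decider predicts about it. The names below are illustrative, not from any real library:

```python
def make_diagonal(halts):
    """Given a claimed halting decider, build a program it must misjudge."""
    def g():
        if halts(g):       # decider claims g halts...
            while True:    # ...so g loops forever instead
                pass
        # decider claims g loops forever, so g halts immediately
    return g

# Two toy "deciders" standing in for any hypothetical code 'b'
def always_halts(program):
    return True

def never_halts(program):
    return False

g1 = make_diagonal(always_halts)  # always_halts claims g1 halts, but g1 would loop
g2 = make_diagonal(never_halts)   # never_halts claims g2 loops, but g2 returns
g2()  # returns immediately, contradicting never_halts
```

Whatever decider you plug in, the diagonal program contradicts it, which is why no universal decider can exist.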

To illustrate this point further let us look at the following bit of python code:

x = 1
while x > 0:
    x = x + 1
    print(x)

Here our starting value for x is 1. We then have a loop that keeps adding 1 to x for as long as x is larger than 0, showing us the value of x after each addition. We as humans can clearly see, without having to calculate every possible value, that since x starts above 0 and we only ever add to it, this loop will never stop. A computer, however, cannot see that, and will keep running until it crashes or is turned off. (In languages with fixed-size integers, an integer overflow would eventually occur because the number becomes too large to handle; Python’s integers grow without bound, so the loop simply runs until it is interrupted or memory runs out.)
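The only way out is for a human, who can see the loop never ends, to impose a limit from outside. A minimal sketch, with an arbitrarily chosen cap:

```python
x = 1
steps = 0
MAX_STEPS = 10  # a human-chosen safety cap; nothing in the loop itself ever ends it

while x > 0 and steps < MAX_STEPS:
    x = x + 1
    steps = steps + 1

print(x)  # 11 -- the loop stopped only because we bounded it
```

The cap encodes the human insight the machine lacks: the condition `x > 0` alone would hold forever.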

Now, that computational experiment tells us two important things about computers and AI. First, they cannot ‘comprehend’ or even work with abstract concepts. For example, if a component of the computer malfunctions, 2+2=5 might be the result of a calculation. A human can clearly discern, even if only by counting on their fingers, that this is the wrong result, while a computer will simply continue working with it.

There are known real-life instances where high-energy particles from space come down to earth and hit a single transistor in a microchip, flipping a bit and causing exactly such issues. To illustrate what abstraction can do here: if such an error occurs in navigation equipment and corrupts the reported altitude, the autopilot might hurl the plane down towards the ground. A human, seeing the flight path displayed graphically, can recognise from that abstract representation that something is wrong and fix the issue.
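As a rough illustration (the numbers here are invented, not from any real incident), flipping a single bit in a stored altitude changes the value drastically, and the computer carries on calculating with the corrupted number as if nothing happened:

```python
altitude_ft = 30000               # hypothetical reported altitude in feet
flipped = altitude_ft ^ (1 << 14) # a cosmic ray flips bit 14 of the stored value

print(flipped)  # 13616 -- the autopilot now "believes" the plane is far too low
```

One flipped transistor, and a plane at 30,000 feet is suddenly, as far as the software knows, at 13,616.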

A second aspect to consider, following in part from the above, is that a computer only does what it is told. It will only execute the exact code that has been fed into it, producing the result that that code outputs, not necessarily the result one desires.
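A trivial sketch of that gap between intent and instruction:

```python
# Intent: the average of 3 and 4, i.e. 3.5
avg = (3 + 4) // 2  # but '//' is floor division, so the code faithfully returns 3

print(avg)  # 3 -- exactly what was written, not what was wanted
```

The computer did nothing wrong; it executed the instruction precisely. The error lives entirely in the translation from intent to code.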

This does open up the question of how artificial intelligence is then able to take abstract inputs and turn them into outputs, as tools such as OpenAI’s ChatGPT and the various art bots do, tools which have been used even for writing school essays.

The answer to that is mimicry and human intelligence. AI, in simplified terms, works by taking in a set of input data, for example human-written articles, and then trying to reproduce similar content based on it. First, that requires sorting the inputs into specific categories, and second, it requires the output to be judged as good or bad and adjustments to be made, either to the code or to the time spent training, to improve accuracy.

This process does require an arbiter of good and bad results, in terms of time and accuracy, and the ability to put somewhat arbitrary labels on things during the sorting of the original set. That labelling is done by real people: the input is handed to click farms in low-wage countries, which sort through it and set up the input sets. Those sets are then fed into the ‘AI’, which, through repeated trial and error, reproduces results similar to the input.
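A crude sketch of that dependence on human labels, using simple text similarity as a stand-in for whatever matching a real system does (the labelled set and function names are invented for illustration):

```python
import difflib

# A tiny stand-in for a human-labelled training set: every label here
# was chosen by a person, not discovered by the machine.
labeled = {
    "photo of a cat": "cat",
    "photo of a dog": "dog",
}

def classify(item):
    """Mimicry: return the label of the most textually similar known example."""
    best = max(labeled, key=lambda k: difflib.SequenceMatcher(None, k, item).ratio())
    return labeled[best]

print(classify("a cat picture"))  # "cat" -- echoed straight from the human-made labels
```

The ‘intelligence’ in the output is borrowed: change the human-supplied labels and the answers change with them.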

To further illustrate this point let us look at what an art AI generated when set to ‘realistic’ art style with the prompt “Farm”.

At first glance this does indeed look like a farm on a nice summer day; however, the closer one looks, the more the thin veneer of verisimilitude comes apart. There is no access road to the farm, no roads to the fields, and no tramlines inside the field. Further, the barn in the middle of the image, along with the small silo beside it, sits within the bounds of a wheat field. The AI has simply taken thousands of paintings and pictures of farms, barns, and fields and stitched something together from them, producing this faulty image. Even with the wealth of information and data it has been provided, it is unable to produce a realistic image from such a simple prompt. One could probably get something much more realistic with a longer description, but that only proves the point further.

Another example is how AI can become repetitive or give downright wrong answers. There are more examples of this than I could possibly list, and anyone can find them online, but I will include a link to an article about Microsoft’s latest AI chatbot and how it has been prone to repetition and error due to the limitations discussed earlier in this article.

This should make two things clear. One: AI isn’t going to write anything new, especially not code. It will rehash already-produced code, text, images and so on, producing mimicked products based on what it was provided with. Two: to create truly new works, even if just to feed into the AI, you will need people. Every image, text, answer, or other output of an AI is not “new” but is instead a mimicry or amalgamation of pre-existing material, or simply pre-existing material edited in some way.

These limitations can be further illustrated with the example of code that does not stop. We can feed a data set of non-halting code into an AI, but it will never know whether a given piece of code stops, at best providing an estimate, which means such code can still slip past an AI filter and be run on a computer. Even simpler, new non-halting code can be written that is dissimilar to the data set, requiring constant manual updating of any data set meant to protect a computer from such an operation.
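A toy sketch of why such a filter stays forever behind: here a naive substring check stands in for any data-set-based recogniser (the pattern list and function names are invented for illustration):

```python
# A crude stand-in for an AI "filter" built from known non-halting code:
# it can only recognise shapes that appear in its data set.
known_nonhalting = ["while True:"]

def flags_as_nonhalting(source):
    return any(pattern in source for pattern in known_nonhalting)

seen = "while True:\n    pass"                 # matches the data set
unseen = "x = 1\nwhile x > 0:\n    x = x + 1"  # same behaviour, new shape

print(flags_as_nonhalting(seen))    # True
print(flags_as_nonhalting(unseen))  # False -- slips straight past the filter
```

Every genuinely new non-halting program starts out in the `unseen` category, which is exactly why the data set needs constant human updating.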

Considering that a computer only does the very specific thing it is told to do, AI will most likely not replace most jobs, but rather become a new tool on the belt. If you are producing research or other new information, it will improve your ability to work through data, though it still relies on you to select the input as well as to generate the new data to be used as input.

If you are in a high-skill job and are actually making things, AI might speed up the design process, but the knowledge of what starting points it can use, the input data, and the parameters of what the output should be will once again depend on a knowledgeable person putting those in.

If you are in the trades or any very open-ended work environment, AI will most likely have very limited impact since, like any software, it requires very specific inputs to produce a specific output. Anything that lacks very clear inputs and outputs, or that often runs into unaccounted-for situations, will pose major issues for any attempt to apply AI technology to it. In the future AI may become more capable in those regards, depending on what technology allows to be provided as input, but due to the lack of repetitive, predictable situations it will most likely remain relatively unimpactful in this sector.

Overall, that leaves us with a potent technology, but one far away from the fictitious lands of computers writing their own code and the AI apocalypse. Computers are simply not capable of working with abstract concepts or taking general orders, and the process of recognising and sorting things, or of converting generalised demands, say, for a specific part, into inputs a computer or AI can use, will always require an intelligent, skilled human to provide that intelligence. An old but somewhat crude description of computers I once heard is still accurate, including for AI: they’re fast idiots, having no understanding of what they are doing but doing what they are instructed to do very quickly.

This is the first piece by our newest author, Benedikt, who is German. Thank you for reading, and if you’d like to see more, consider subscribing.
