Article | Guy Brown
Four big lessons from World Information Architecture Day 2018
We went to WIAD Manchester 2018 to hear the very latest thinking from the UX community
About 20 years ago, I studied a joint major in Computer Science and Artificial Intelligence at Sussex University. The course was a mix of science and philosophy. Because it came after the optimism of Artificial Intelligence in the 50s, 60s and 70s, its subsequent decline, and then its renewal in the 90s, it was mostly focused on the practical side of AI.
The course covered topics such as Natural Language Processing, Neural Networks and Genetic Algorithms, with a lot of focus on ubiquitous computing, specifically wearable computers. There was still a strong element of philosophy and the study of what's called Strong AI, True AI or Artificial General Intelligence. However, we were into the era of Weak AI - or Narrow AI, as it's also known.
During the first AI hype cycle - which took place between 1956 and 1973 - funding for research was freely available, with investors having no real expectation of results. However, this funding started to dry up in the 70s, creating what's now referred to as the AI Winter.
As research shifted focus to the more tangible results of Narrow AI, the 90s ushered in a renewal in the interest and funding available.
In 1980, the philosopher John Searle proposed a famous thought experiment called the Chinese Room, designed to refute what he referred to as Strong AI. The premise was: “suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese.”
It takes Chinese characters as input and, by following the instructions of a computer programme, produces other Chinese characters, which it presents as output. The computer performs its task so convincingly that it comfortably passes the Turing Test: it convinces a human Chinese speaker that the programme is itself a live Chinese speaker.
Developed by Alan Turing in the 1950s, the Turing Test was designed to test a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, a human's. A human evaluator judges a natural-language conversation between a human and a machine designed to generate human-like responses. The machine passes the test if the evaluator cannot reliably tell it apart from the human.
Searle’s question was: “does the machine literally ‘understand’ Chinese? Or is it merely simulating the ability to understand Chinese?” He calls the first position “Strong AI” and the latter “Weak AI” - what we've referred to as Narrow AI.
Searle then supposes that he's in a closed room with an English version of the computer programme, a large batch of Chinese writing, and the set of rules along with all the symbols used. He could receive Chinese characters from the human evaluator, process them according to the programme's instructions, and produce Chinese characters as output.
If the computer had passed the Turing test this way, it stands to reason that Searle would also pass the test by simply running the programme manually. Searle asserts that there's no essential difference between the roles of the computer and himself in the experiment. Each is following the program step-by-step, producing a behaviour which is then interpreted by the evaluator as demonstrating intelligent conversation.
However, as Searle doesn't speak Chinese he doesn't understand the conversation at all. He therefore argues that the computer wouldn't be able to understand the conversation either.
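The mechanical rule-following at the heart of the thought experiment can be sketched in a few lines of code. This is a toy illustration, with an invented rule table; the point is that the operator (or the CPU) produces the right symbols without any grasp of what they mean.

```python
# A toy "Chinese Room": the responder follows a lookup table of rules,
# producing plausible replies without any understanding of the symbols.
# The rule table below is hypothetical and purely illustrative.

RULES = {
    "你好": "你好！",                 # input symbols -> output symbols
    "你会说中文吗": "会，当然会。",
}

def room(symbols: str) -> str:
    """Apply the rule book mechanically; no meaning is involved."""
    return RULES.get(symbols, "请再说一遍。")  # default: "please say that again"

print(room("你好"))  # whoever runs the rules need not know what this means
```

A real programme would have a vastly larger rule set, but the relationship between operator and rules is the same: syntax in, syntax out.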
He argues that “without ‘understanding’ (or ‘intentionality’), we cannot describe what the machine is doing as ‘thinking’. Therefore, since it does not think, it does not have a ‘mind’ in anything like the normal sense of the word.”
He concludes that Strong AI is false. Although controversial, I think the argument highlights where AI research is now, and a common misunderstanding of current AI in the mainstream.
Fast forward 40 years and we're now well into the second hype cycle for AI.
Once more, lots of money is being poured into AI research, as Narrow AI has been able to produce tangible results - unlike the first hype cycle, which didn't deliver on its promises.
There has been a lot of progress in the past 10 years, and there are now over twenty domains in which Narrow AI programmes perform as well as, if not better than, humans at very specific tasks.
The massive burst of excitement is highly reminiscent of the one that took place during the first hype cycle. Investors are pouring billions into AI research and startups, and futurists are once again beginning to make alarming predictions about the robots taking over.
Some of the obstacles that ended the first hype cycle still remain over 40 years later and serious advances will be required to overcome them. As with the first cycle, investors may not see the return they expect in the next fifteen years.
Of the two main issues that thwarted the first AI boom, the first one, the issue of computing power, has mostly been addressed. We now have large scale cloud-based computing power on hand to be able to run these AI programmes that require large amounts of CPU power. In fact, many of the concepts used today are based on the ideas of the 50s, 60s and 70s.
The conceptual and theoretical issue still very much exists however, and we still don’t understand how the brain functions and what’s behind creativity, reasoning and humour. Without this understanding it is impossible to know what machine learning programmes should be trying to imitate.
The current hype around AI is certainly booming, and serious advances are being made in narrower domains such as voice, artificial neural networks and deep learning. These advances, together with the history and evolution of AI, are behind the mainstream misunderstanding and misinterpretation of AI - fuelled by the media and industry through dystopian views and audacious claims.
This creates a mental link in people's minds between Narrow AI and Strong AI, and makes us think there's only a small leap between speaking to Alexa and having a conversation with Hal or Skynet.
AI will no doubt help you in your job in the short term and completely change your job in the medium term. The difficulty facing most businesses, though, is understanding what is and isn't AI, and which problems it can help you solve.
AI is often name-dropped, but not everyone is clear about how, why or when they are using it when trying to sell you AI-based services…
For example, a recent report claimed that 40% of AI startups in Europe don’t actually use AI at all.
The simple answer is that it could be all of those….
It all depends on how they've been programmed. For example, chatbots are often linked to AI, but not all chatbots are AI-based. Some are scripted. Some are AI-based but have been poorly trained with inadequate data. Some are so good they can easily pass the Turing Test. As a consumer, it's not always clear which one you're dealing with.
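To show how little "AI" a scripted chatbot can contain, here's a minimal sketch. The keywords and canned replies are invented for illustration; there's no learning, no model, just hand-written rules - which is exactly what many deployed "chatbots" are.

```python
# Minimal scripted chatbot: hand-written keyword rules, no AI involved.
# Keywords and replies below are hypothetical examples.

def scripted_bot(message: str) -> str:
    text = message.lower()
    if "open" in text or "opening hours" in text:
        return "We are open 9am-5pm, Monday to Friday."
    if "refund" in text:
        return "Refunds are processed within 5 working days."
    return "Sorry, I didn't understand. Could you rephrase?"

print(scripted_bot("When are you open?"))
print(scripted_bot("Can I get a refund?"))
```

From the outside, a short exchange with this script may feel no different from a poorly trained AI-based bot - which is precisely why consumers can't easily tell them apart.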
Everyone is saying you need AI in your business, and they're 100% right. But the problems you're trying to solve vary greatly, so the types of AI solutions will vary greatly too. The lofty idea of AI can make it difficult to decipher what type of AI system you need, and why, in order to solve those problems. As with everything, some are easy to embed into your business and some take a lot of effort.
Systems built from the ground up to solve very narrow problems can be very effective - for example, AI used in the automation of production lines.
However, retrofitting AI into your business, its processes and practices, with a less narrow focus, can be more difficult. Depending on the problem you're trying to solve, there are off-the-shelf, ready-made solutions - like the image recognition or voice-to-text services from Microsoft Cognitive Services - that can be integrated into your business without a degree in AI or a team of AI specialists.
But some, such as deep learning systems used to solve specific business issues, need to be completely bespoke and require much more effort and specialist knowledge.
Regardless of the problem or the solution, there's one truth that binds them all together…
Current solutions will only be as good as the data you can feed them. This is something to bear in mind: what data do I have that an AI system can learn from, in order for it to help me with what I'm trying to achieve?
Systems such as Artificial Neural Networks and Deep Learning systems can yield amazing results when fed with enough data but may struggle to perform if they don’t have enough of it, or the data is in constant flux.
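A tiny worked example makes the "only as good as your data" point concrete. This sketch uses a 1-nearest-neighbour classifier (much simpler than a neural network, but subject to the same data dependence); the data points, labels and "true" class boundary are all invented for illustration.

```python
# Sketch: a 1-nearest-neighbour "model" is only as good as its training data.
# Toy domain (hypothetical): values below 5 are "low", values above are "high".

def nearest_label(x, training):
    """Return the label of the closest training point (1-NN)."""
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

sparse = [(1.0, "low"), (6.0, "high")]            # too few, unrepresentative examples
dense = sparse + [(4.0, "low"), (8.0, "high")]    # better coverage of each class

query = 3.6  # true class is "low" in this toy domain
print(nearest_label(query, sparse))  # -> "high" (wrong: not enough data)
print(nearest_label(query, dense))   # -> "low"  (more data corrects it)
```

The same failure mode scales up: a deep learning system trained on sparse or unrepresentative data will draw its decision boundaries in the wrong places.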
All AI systems need a good source of data to be able to perform optimally!
One of the big areas of investment by businesses in AI is around using technologies such as Deep Learning to predict future trends and patterns to increase or optimise performance.
There are three obstacles that businesses need to overcome to benefit from that investment. Data in big businesses often resides in many different databases and locations and most of the data is noisy and unrefined. AI is likely to fail with this data so the first job for a business is to start harmonising and filtering it.
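That harmonising-and-filtering job can be sketched in miniature. The field names, records and shared schema below are invented for illustration: two "databases" use different column names, and one row is too noisy to keep.

```python
# Sketch of the "first job": map records from different databases onto one
# shared schema and filter out noisy rows before any AI system sees them.
# All field names and records here are hypothetical.

crm_rows = [{"CustName": "Acme Ltd", "Spend": "1200"},
            {"CustName": "", "Spend": "oops"}]          # noisy, unusable row
erp_rows = [{"customer": "Beta plc", "total_spend": 800}]

def harmonise(row, name_key, spend_key):
    """Map a source-specific row onto the shared schema, or None if unusable."""
    name = str(row.get(name_key, "")).strip()
    try:
        spend = float(row.get(spend_key))
    except (TypeError, ValueError):
        return None
    return {"name": name, "spend": spend} if name else None

clean = [r for r in
         ([harmonise(r, "CustName", "Spend") for r in crm_rows] +
          [harmonise(r, "customer", "total_spend") for r in erp_rows])
         if r is not None]
print(clean)  # only well-formed, consistently named records remain
```

In a real business this happens at the scale of data warehouses and ETL pipelines, but the principle is the same: one schema, clean rows in, noise out.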
As previously discussed, AI needs a lot of computational power, so in most cases the only viable option is a cloud-based platform. This can raise issues for businesses around security and compliance, especially with sensitive business data, so a strategy will need to be put in place to address them.
Changes in data due to regulation updates or evolving business environments may force the AI system to be rebuilt, retrained and retested. This could cause serious issues due to the lack of new data to train the AI programme with. This may (or may not) influence what businesses decide to use AI systems for…
I think the main focus for AI over the next few years will still be on augmenting human intelligence rather than replacing it in order to get around issues such as ethics. And the issue of data will be a big part of the success of AI. Or at least of the application of Narrow AI systems that are available to the mainstream.
Narrow AI systems will continue to dominate over the next few years, as companies try to provide a return on the investments being made in order to keep the funding coming in.
That’s not to say there won’t be a lot of research into Strong AI, but we are less likely to feel the impact over the next few years. For example, check out Hanson Robotics and ‘Sophia’ if you haven’t already!
I think we’ll also see some amazing examples of AI systems coming to fruition. However, mainstream applications of these will remain a step behind due to the issues surrounding data, ethics, security and privacy.
At the same time I also think we’ll start to see some AI solutions become embedded into everyday life to the point that they will no longer be called AI and become just a part of everyday life.
From a more direct marketing perspective, voice has long been highlighted as the next big thing. It has just started its descent into the Trough of Disillusionment in the hype cycle, having not yet gained the traction needed to reach its goal of revolutionising commerce.
It will, however, get there. People are still getting used to using voice. Once they are fully comfortable with it and the technology comes of age then the revolution will take place.
And while devices such as Alexa and Siri might not fool you into thinking you are talking to a person, there is technology out there that does. Google recently showcased a creepy sounding system for booking appointments that sounded quite convincing...
MIT has also developed a computer system that can transcribe words that the user verbalises internally but does not actually speak aloud. So you may not even need to use your voice!
Technologies such as these, along with the improved interpretation of sentiment, will be where all the action will be.
For example, how I feel when I tell my AI assistant that I'm hungry will determine what kind of place it sends me to. If I'm feeling hyped and hangry, it may send me to McDonalds. Alternatively, if I'm feeling chilled out and mellow, it may send me to the gourmet burger joint around the corner!
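The hungry-assistant example could be reduced to a rule over a sentiment score. The scoring scale, thresholds and venue types below are all invented; a real assistant would derive the score from voice, text or facial cues.

```python
# Hypothetical sketch: route a hungry user to a venue type based on a
# sentiment score (-1.0 = mellow .. +1.0 = hyped). Thresholds are invented.

def recommend(sentiment: float) -> str:
    if sentiment > 0.5:
        return "fast food"              # hyped and hangry: quickest option
    if sentiment < -0.2:
        return "gourmet burger joint"   # chilled out: somewhere slower-paced
    return "local cafe"                 # neutral mood: middle ground

print(recommend(0.8))   # -> fast food
print(recommend(-0.6))  # -> gourmet burger joint
```

The hard part, of course, is not this final rule but producing a reliable sentiment score in the first place - which is where the real AI work sits.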
These applications of AI will change the role of marketing from the many channels that exist now into creating a single channel - you. Along with this will come different methods, tools and strategies - all with their own hype cycles - designed to literally get inside your mind.
To look at it in a different way, you’ll be marketing directly to AI bots which will then communicate with a person via voice or thought based on what it has learnt about that person.
However, where the lack of understanding of intuition and human thought halted the progress of Strong AI, the issues of privacy, data, ethics and security will remain the biggest hurdles for Narrow AI.
Some of the best AI solutions rely on listening to all your conversations, watching all of your facial movements, looking at your body language and sending all that data up into the cloud.
This will no doubt freak out a lot of people and raise concerns that maybe aren't that far away from the dystopian view that the media portrays. So trust will also play a big part in the success of widespread adoption of AI in everyday life.
Most AI systems work by analysing data and finding patterns in that data to learn from in order to produce results. Obviously, they have no concept of ethics and the data can lead to biased results.
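How skewed data produces biased results can be shown with the simplest possible "model": one that just replays the majority pattern per group. The loan-decision history below is entirely fabricated for illustration.

```python
# Sketch: a "model" that learns nothing but the most frequent outcome per
# group. If the training data is skewed, the predictions are skewed too -
# no ethics involved, just faithful pattern-matching.
from collections import Counter

def majority_predictor(history):
    """Return {group: most frequent outcome} learned from (group, outcome) pairs."""
    by_group = {}
    for group, outcome in history:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

# Hypothetical decision history that under-approves group "B"
history = ([("A", "approve")] * 8 + [("A", "reject")] * 2 +
           [("B", "approve")] * 3 + [("B", "reject")] * 7)
model = majority_predictor(history)
print(model)  # the historical bias against "B" is learned faithfully
```

Real systems are far more sophisticated, but the mechanism is the same: patterns in the data, biases included, become the model's behaviour.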
OpenAI recently developed a system called GPT-2 which is so good at writing that it has been dubbed “deepfakes for text”; its creators have declined to release the full model for fear of misuse - for example, being used to generate fake news!
As a side note, OpenAI is a nonprofit company backed by Elon Musk who is definitely responsible for a fair bit of AI scaremongering!
The next 5 years will be all about advances in Narrow AI in order to keep investments flowing - with possibly a slowdown in those investments if businesses struggle, or are slow, to adopt AI.
There will be lots and lots more hype and audacious claims for AI and a battle between AI, ethics, security and privacy. It’s going to be an exciting and fascinating journey…