What is artificial intelligence?
Posted on: May 24, 2022
by Ruth Brooks
Artificial intelligence, also known as AI, is the branch of computer science that simulates human intelligence in machines. AI computers are programmed to solve problems, make decisions, and perform tasks like a human mind might.
Despite the often outlandish depictions of AI in popular culture and science fiction – the superhuman androids of The Terminator films, for example – most people interact with artificial intelligence every day, often without even realising it. Common applications for AI include:
- Customer service chatbots. Many of the online chatbots people interact with on business websites are powered by artificial intelligence. Common questions and key phrases can trigger automated real-time responses as the bot works to better understand the customer’s query and provide them with an answer or resolution.
- Digital assistants. Apple’s Siri and Amazon’s Alexa are popular examples of digital smart assistants or automated personal assistants. They’re a bit like an advanced version of a chatbot, pulling data from multiple sources to respond to specific questions and requests. They’re also continually learning to adapt to specific preferences and even make personalised predictions and suggestions.
- Entertainment recommendations. Every time Netflix suggests a new TV show or movie based on someone’s viewing history, or Spotify recommends a new playlist based on a person’s favourite artists, they’re using AI-powered technology. This technology determines what content a person might enjoy based on their interests, and even based on the habits of people with similar interests. The technology continues to learn, adapt, and evolve as it collects more data, generating content recommendations closely tailored to the person in question.
Artificial intelligence is also a powerful tool for businesses and employers across a diverse range of sectors, from finance to healthcare. It can be utilised for data analysis, supply chain or business process automation, marketing and social media activity, and more – and new uses and applications are being developed all the time.
When was artificial intelligence invented?
The history of artificial intelligence as we know it dates back to 1950, when mathematician Alan Turing published Computing Machinery and Intelligence. In it, Turing posed a question – can machines think? – and introduced what’s now known as the Turing test. Called the imitation game by Turing himself, the test is essentially a game where a person attempts to distinguish between a computer response and a human response.
It was just a few years later, in 1956, that computer scientist John McCarthy publicly coined the term artificial intelligence at the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College. This workshop is considered to be the founding event of AI as a field of research, and McCarthy is often referred to as the father of AI. However, the title is also sometimes applied to Turing, as well as to MIT cognitive and computer scientist Marvin Minsky.
Around the same time, the first AI software programme – the Logic Theorist – was demonstrated, and by the mid-1960s MIT computer scientist Joseph Weizenbaum had developed the precursor to the 21st century’s chatbots: ELIZA, one of the first natural language processing (NLP) programs, which simulated conversation.
From there, all of the pieces of today’s AI technology started to come together – voice and speech recognition, facial recognition, backpropagation (which allows neural networks to learn from mistakes), image recognition, and machine learning.
Popular breakthroughs included:
- IBM’s Deep Blue winning its first game against world chess champion Garry Kasparov in 1996, before defeating him in a full match the following year.
- IBM Watson defeating two former Jeopardy! champions, Ken Jennings and Brad Rutter, in 2011.
- DeepMind’s AlphaGo beating Lee Sedol, a world-champion Go player, in a five-game match in 2016.
How does artificial intelligence work?
Artificial intelligence systems are programmed to parse through huge data sets and use sophisticated algorithms to analyse that data. They can even learn from historic data to continually develop and improve – a process called machine learning.
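The idea of "learning from historic data" can be sketched in a few lines of plain Python. This is a toy illustration only – the model, the data, and the numbers are all invented for the example, and real AI systems use far richer models and vastly more data:

```python
# A toy illustration of machine learning: fitting a simple model to
# historic data so it can predict new, unseen values.

def train(data, steps=1000, lr=0.01):
    """Learn a weight w so that prediction = w * x fits the data."""
    w = 0.0
    for _ in range(steps):
        # Compute the gradient of the mean squared error and step downhill.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Historic" data: each pair is (input, observed outcome); here y is roughly 3x.
history = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8)]
w = train(history)
print(round(w * 5))  # the trained model predicts the outcome for input 5 → 15
```

The key point is the loop: the model starts out knowing nothing, measures how wrong its predictions are against the labelled history, and nudges itself to be slightly less wrong – thousands of small corrections add up to a model that generalises to inputs it has never seen.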
An artificial intelligence system that completes specific tasks – for example, Siri, Alexa, and similar virtual assistants – is known as weak AI, or narrow AI. More advanced – and, for now, still hypothetical – AI technology is called strong AI, or artificial general intelligence (AGI). An AGI would replicate the human brain and its cognitive abilities, reasoning its way to answers and solutions across any domain – even proving mathematical theorems – rather than excelling at a single narrow task.
Other areas of AI include:
- Deep learning. This subset of machine learning uses multi-layered artificial neural networks – algorithms loosely inspired by the human brain – to learn from huge amounts of data, such as big data sets. The more layers the network has – the ‘deeper’ it is – the more abstract the patterns it can learn, and the more it can improve its outputs or outcomes.
- Computer vision. This technology captures and interprets image and video information – it’s what allows a smartphone to unlock by recognising its owner’s face.
- Autonomous vehicles. Self-driving cars use artificial intelligence, along with sensors, cameras, radar, and various machine learning algorithms to operate without a human being at the wheel.
- Supervised learning. Also known as supervised machine learning, this approach trains algorithms on labelled data sets – examples where the correct answer is already known – so that they learn to classify data or predict outcomes accurately.
- Artificial intelligence neurons and artificial neural networks (ANNs). ANNs are interconnected groups of nodes – or AI neurons – inspired by the neurons of the human brain. The artificial neurons are programmed to signal one another, working together to learn, recognise patterns, and make predictions from data.
- Expert systems. An expert system emulates the decision-making and problem-solving abilities of human experts. It solves complex problems by drawing on large knowledge bases and applying if/then rules, rather than conventional procedural code.
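The if/then approach behind expert systems can be illustrated with a minimal forward-chaining rule engine in plain Python. The rules and fact names below are invented for the example – a real expert system would encode thousands of rules elicited from human specialists:

```python
# A minimal sketch of an expert system: facts are plain strings, and each
# rule says "if all these facts hold, conclude this new fact".

RULES = [
    # (conditions that must all be true, conclusion to add)
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Forward chaining: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire the rule if all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"has_fever", "has_cough", "short_of_breath"})))
```

Note how the second rule can only fire after the first has added "possible_flu" – chaining conclusions together like this is what lets a rule base reach answers no single rule states directly.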
Current challenges in artificial intelligence
AI systems and technologies are powerful tools that continue to develop at a rapid pace. However, AI is not without its challenges. For example:
- it can be expensive to develop and implement bespoke AI products.
- AI and machine learning require huge amounts of computing power to store and analyse data, which has repercussions such as electricity costs and carbon emissions.
- there’s fear of the unknown. For instance, the thought of machine intelligence – or hypothetical superintelligence – can make some people concerned about potential job losses.
- an artificial intelligence skills gap means there’s a shortage of AI and machine learning experts to fill the growing number of vacancies in the field.
Help shape the future of AI
AI development shows no sign of slowing down. A recent study by SnapLogic found that 93% of UK and US organisations consider AI to be a business priority – but more than half acknowledged that they don’t employ the skilled AI talent to realise their AI strategies.
You can develop the expertise you need to succeed in the field with the 100% online MSc Computer Science at North Wales Management School. This flexible master’s degree includes a focus on machine learning, and was developed for ambitious professionals from a variety of backgrounds who want to launch – or boost – their career in computer science.