The basis of neural networks: Cracking the code
A subcategory of artificial intelligence, neural networks are AI models with vast and groundbreaking potential. From powering search engines and voice recognition on our smartphones to predicting diseases from medical imaging, neural networks have already proven to be incredibly versatile and powerful tools – and their full potential is still to be realized.
While these models have become a staple in the discourse surrounding AI as a whole, their significance – and potential – merit specific recognition. But what exactly are neural networks and how do they work? In short, they are algorithms that detect and uncover patterns in data through a process inspired by the biological brain. This pioneering technology is pushing the boundaries of what we thought possible.
In this article, we will demystify the basics of neural networks and dive into how they are revolutionizing our relationship with technology, and our understanding of it.
What is a neural network?
Neural networks, also known as artificial neural networks or simulated neural networks, are a type of machine-learning algorithm inspired by the structure and functioning of the biological brain. They are composed of interconnected nodes, known as neurons. But what is a neuron? Very simply put, the neurons in a neural network are simple mathematical functions that process incoming information (received through the network's edges, which act like synapses) and output a numerical value (depending on whether the neuron was activated).
In AI, neurons are pieces of software which work together to process and analyse complex data. Each neuron receives input from the previous layer, applies a mathematical function to that input and passes the result to the next layer. Choosing and tuning these mathematical functions is the main challenge in designing a neural network, since its performance depends on having the correct setup for the desired output. This tuning is carried out through an automatic process called training.
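The neuron described above can be sketched in a few lines of code: a weighted sum of inputs plus a bias, passed through an activation function. This is a minimal illustration, not a production implementation; the sigmoid activation and the weight, bias and input values are illustrative choices, not taken from any particular network.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs plus a
    bias, passed through a sigmoid activation function."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid squashes to (0, 1)

# A neuron with two inputs; weights and bias are illustrative values.
output = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(output, 3))
```

The output, a value between 0 and 1, is what gets passed on to neurons in the next layer.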
What are neural networks used for?
Once just a concept, neural networks are now revolutionizing a whole range of industries. Their versatility and power have led to a wide range of practical applications that are transforming the way we use and interact with technology. For example, they can analyse large amounts of financial data and make predictions about stock market trends, creditworthiness and fraud detection. This has the potential to greatly improve investment strategies and risk management.
In the healthcare industry, neural networks are used in disease diagnosis, drug discovery and personalized medicine. Health professionals can use artificial neural networks to help analyse medical images, patient records and genomic data to identify patterns and make predictions, leading to more accurate diagnoses and tailored treatment plans. Neural networks can also contribute to faster drug development by enabling the analysis of large-scale datasets.
Another talked-about application of neural networks is in self-driving vehicles – specifically with regard to object detection, navigation and real-time decision making – but also in the realm of user experience.
Across the service industry, AI-powered chatbots benefit greatly from neural networks, which underpin entity recognition, natural language processing and sentiment analysis. Recommendation engines – such as those which suggest the next binge-worthy show we might like to watch – rely on the same pattern recognition and prediction capabilities.
How do neural networks work?
What makes neural networks particularly fascinating is that they are inspired by the biological brain. They consist of a few key components:
- Neurons are the fundamental units of simulated neural networks. They receive input signals, process them using activation functions and produce output signals. These artificial neurons are interconnected in layers to form a network. The first layer, called the input layer, receives the initial data, while the last layer, known as the output layer, produces the end result. By taking inputs from the previous layer and passing outputs to the next layer, each neuron enables information flow throughout the network. There can be one or more hidden layers in the network, where calculations take place.
- Weights are the connections between neurons. They determine the strength of the signal being passed from one neuron to another. The weights are adjusted during the training phase of the neural network, allowing the model to learn and improve its performance.
- Activation functions introduce non-linearity into neural networks and help them model more complex relationships in data. They determine whether a neuron should be activated or not based on its inputs. These functions play a crucial role in shaping the behaviour and learning capabilities of neural networks.
The interplay between these key elements is what defines an artificial neural network.
Feedforward, or forward propagation, is the backbone of how neural networks work, enabling them to make predictions and generate outputs. At its core, forward propagation is a simple yet powerful process. It involves passing the input data through the layers of interconnected neurons, with each neuron applying the activation function to its weighted sum of inputs.
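The process just described can be sketched directly: the input passes through each layer in turn, and every neuron applies an activation function to its weighted sum of inputs. The tiny 2-3-1 network below is a minimal example; the ReLU activation and all weight values are illustrative assumptions, not trained values.

```python
def relu(x):
    # ReLU activation: passes positive values unchanged, clamps negatives to 0
    return max(0.0, x)

def forward(inputs, layers):
    """Forward propagation: each layer computes, for every neuron, the
    weighted sum of the previous layer's outputs plus a bias, then applies
    the activation function, and passes the result to the next layer."""
    activations = inputs
    for weights, biases in layers:
        activations = [
            relu(sum(a * w for a, w in zip(activations, neuron_weights)) + b)
            for neuron_weights, b in zip(weights, biases)
        ]
    return activations

# A 2-input, 3-hidden-neuron, 1-output network with illustrative weights.
hidden = ([[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]], [0.0, 0.1, -0.2])
output_layer = ([[0.7, 0.4, 0.9]], [0.05])
print(forward([1.0, 2.0], [hidden, output_layer]))
```

Each `(weights, biases)` pair here plays the role of one layer; stacking more pairs gives a deeper network.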
Backpropagation is often equally important. This is the process by which a neural network adjusts its weights in response to feedback received during training. It works by propagating the error from the output layer back through the network, allowing each neuron to adjust its weights accordingly. By iteratively adjusting the weights based on the feedback, the network can gradually improve its accuracy in making predictions and generating desired outputs.
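The weight-adjustment loop is easiest to see in the smallest possible case: a single linear neuron trained by gradient descent to fit y = 2x. With one weight, "propagating the error back" reduces to the chain rule on that weight. The data, learning rate and number of epochs below are illustrative choices.

```python
# A single linear neuron fitting y = 2x by gradient descent.
# For prediction w*x, error e = (w*x - y) and squared loss e**2,
# the chain rule gives the gradient dloss/dw = 2*e*x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # initial weight
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:
        error = w * x - y        # forward pass: compute the error
        w -= lr * 2 * error * x  # backward pass: step against the gradient

print(round(w, 3))
```

In a real network the same update is applied to every weight, with each layer's gradient computed from the gradients of the layer after it.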
Understanding the different types of neural networks
There are various types of neural network models, each designed to excel in specific tasks, such as image recognition, natural language processing or time series analysis. Let’s look at the main types of artificial neural networks:
- Feedforward neural networks (FNNs) are a type of artificial neural network where the information flows only in one direction, from the input layer to the output layer. They are commonly used for tasks such as pattern recognition, classification and regression.
- Convolutional neural networks (CNNs) are mostly feedforward networks designed to process grid-like data such as images or videos. The use of a convolutional neural network in deep learning relies on the application of filters to local regions of the input data, allowing it to automatically learn hierarchical representations of visual features. CNNs have become instrumental in image classification, object detection and computer vision tasks.
- Recurrent neural networks (RNNs) are designed to process sequential data by having feedback connections. This allows them to retain information about previous inputs and use it to make predictions or decisions. RNNs are widely used in tasks like language modelling, speech recognition and machine translation.
- Residual neural networks (ResNets) are a special kind of feedforward network that allows the network to “skip” over certain layers, especially if they do not contribute to a better result. They are widely used, for example, in semantic segmentation tasks. ResNets are a simple yet effective technique for successfully training very deep neural networks.
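The skip connection at the heart of a residual network can be sketched in one line: the block's output is its unchanged input plus the layer's transformation. The toy "layer" below is hypothetical, chosen to show that a layer contributing nothing is effectively skipped.

```python
def residual_block(x, layer):
    """A residual (skip) connection: the block outputs the layer's
    transformation added element-wise to the block's unchanged input,
    so the network can effectively 'skip' an unhelpful layer."""
    return [xi + yi for xi, yi in zip(x, layer(x))]

# A hypothetical layer that happens to output zeros: the residual block
# then passes its input through untouched (the identity mapping).
unhelpful_layer = lambda x: [0.0 for _ in x]
print(residual_block([1.0, 2.0], unhelpful_layer))
```

Because the identity mapping is always available, gradients can flow through very deep stacks of such blocks, which is what makes ResNets trainable at depth.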
Beyond these few examples, there exist more varieties of neural network models with a multitude of applications, such as generating synthetic data, data visualization, feature extraction and simulation. Radial basis function neural networks, for example, are often used for function approximation and classification tasks, particularly where the relationships between inputs and outputs are complex or non-linear. Graph neural networks, meanwhile, are useful for analysing data held in graphs.
How are neural networks trained?
Training a feedforward neural network involves adjusting the weights associated with each connection between neurons. This requires datasets to serve as examples from which the network can learn patterns and correlations and make accurate predictions. The quality and diversity of training data play a vital role in the network’s ability to generalize and perform well on unseen data.
It is worth noting that training a neural network model has inherent constraints:
- Data requirements: Training a neural network requires large amounts of labelled data, which may not always be readily available.
- Interpretability: Neural networks are often referred to as “black boxes” because of the high dimensionality (data complexity) in which their computations take place, making it challenging to interpret the reasoning behind their decisions.
- Computational resources: Training neural networks can be computationally intensive, requiring significant initial computational resources and time. Further resources are also needed during deployment.
- Overfitting and underfitting: Neural networks can be prone to “overfitting”, where they become too specific to the training data and may not generalize well to new, unseen data. Conversely, when a model fails to capture important distinctions and patterns in the data, leading to poor performance even on training data, this is called “underfitting”.
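Overfitting is typically detected by tracking loss on a held-out validation set alongside the training loss. The loss values below are hypothetical, chosen only to show the characteristic pattern: training loss keeps falling while validation loss bottoms out and then rises again.

```python
# Hypothetical per-epoch losses on the training set and a held-out set.
train_losses = [0.90, 0.50, 0.30, 0.20, 0.15, 0.12]
val_losses   = [0.95, 0.60, 0.45, 0.44, 0.50, 0.58]

# Training loss keeps falling, but validation loss starts rising again:
# the model is memorizing the training data rather than generalizing.
best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
overfitting = val_losses[-1] > val_losses[best_epoch]
print(best_epoch, overfitting)
```

A common remedy, known as early stopping, is simply to keep the model weights from the epoch where validation loss was lowest.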
Pros and cons of neural networks
Regardless of the architecture of neural networks, their ability to learn patterns, adapt to change, perform multiple actions simultaneously and process vast amounts of unorganized data makes them a real game-changer. The main benefits of neural networks are:
- Higher accuracy: Neural networks can detect complex patterns that may not be apparent to human analysts or rule-based systems.
- Adaptability: Neural networks can adapt to changing patterns by continuously learning from new data.
- Scalability: Neural networks can handle large volumes of data efficiently, making them ideally suited to real-time processing.
But while neural networks are undoubtedly powerful tools that have transformed various industries, like any technology, they come with their own challenges and limitations. Understanding these challenges is essential for maximizing their potential.
First, neural networks require datasets to learn and make accurate predictions. The quality and representativeness of the data are crucial for the network’s performance. Acquiring and preparing these datasets can be time-consuming and resource-intensive.
Second, designing and optimizing neural networks requires expertise and computational power. Choosing the right architecture, adjusting hyperparameters and training the model can be a complex and iterative process. This complexity can make it difficult even for experts to implement and apply neural networks effectively.
Neural networks also face limitations in terms of interpretability. Because of their complexity, it can be challenging to understand and explain the decision-making process of a neural network. This lack of interpretability raises concerns in critical applications, such as healthcare and finance, where transparency and accountability are essential.
Towards robust AI networks
Assessing the robustness of neural networks is crucial to ensuring that AI systems maintain a consistently high level of performance across operating conditions. Neural network systems pose specific challenges as they are both hard to explain and prone to unexpected behaviour due to their non-linear nature. This calls for alternative approaches, including International Standards.
The ISO/IEC 24029 series takes a holistic approach by addressing both ethical concerns and emerging technology requirements to enable the responsible adoption of neural networks. It consists so far of a general overview and a methodology for the use of formal methods to assess robustness properties of neural networks. This important series, still under development, will serve as a foundation for establishing trust in AI systems worldwide.
- ISO/IEC TR 24029-1:2021 – Robustness of neural networks – Part 1: Overview
- ISO/IEC 24029-2:2023 – Robustness of neural networks – Part 2: Methodology for the use of formal methods
Are neural networks the future of artificial intelligence?
While advances in neural networks can provide endless opportunities for creative problem-solving, the technology must be developed with responsible, thoughtful and forward-looking guardrails. Like any artificial intelligence, neural networks must advance within ethical and responsible bounds so that they can support human progress with minimal risk.
International Standards can help researchers, regulators, users and other stakeholders align on what is needed, how to track progress, and best practices. Although the benefits we are already seeing – from our hospitals to our home comforts – are clear, it is vital that we make sure considerations of safety, privacy and transparency are built into the development of this technology. Only with a common language, shared metrics and a unified vision can we maximize the potential of neural networks for the greater good.