Adaptive Resonance Theory – ART 1.5ish…

It turns out descriptions like ‘subtle changes’ or ‘more complex set-up, simpler equations’ are somewhat lacking when comparing ART1 and ART2 networks. The changes involved are significant (mostly within the F1/Input/Comparison layer), with some 15 equations, including ODEs.

I think this shows that my current ART1 network is most likely far simpler than the true model, as it relies on only one major equation (the feed-forward weight scaling equation) along with the dot product.
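To make those two computations concrete, here is a minimal sketch of what I mean. This is not the blog's actual implementation; it assumes the common textbook fast-learning form of the ART1 bottom-up weights, b_i = L·c_i / (L − 1 + |c|), with the constant L assumed to be 2, and a plain dot product for choosing the winning recognition node:

```python
def scale_feedforward_weights(exemplar, L=2.0):
    """Feed-forward (bottom-up) weight scaling for a binary exemplar.

    Assumes the common fast-learning form b_i = L * c_i / (L - 1 + |c|),
    where |c| is the number of active bits; L = 2 is an assumption.
    """
    total = sum(exemplar)
    return [L * c / (L - 1 + total) for c in exemplar]


def activation(input_pattern, weights):
    """Dot product of an input pattern with a node's feed-forward
    weights; the node with the highest activation wins."""
    return sum(x * w for x, w in zip(input_pattern, weights))


# Example: a 4-bit exemplar with three active bits.
w = scale_feedforward_weights([1, 0, 1, 1])   # each active bit -> 0.5
a = activation([1, 0, 1, 1], w)               # 1.5 for a perfect match
```

Note how the scaling divides by the number of active bits, so sparser exemplars get proportionally larger weights.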

As I believed the ‘true’ ART2 architecture was far too complex, I have instead been modifying my existing ART1 network to accept real values. Once complete, I will detail the ‘modifications’ (hacks) required, but so far the network accepts real inputs. There is no comparison step at this point (zero vigilance), so all 5 patterns are classed as new exemplars.

Even this ‘simple’ step required significant modification of the ART1 implementation – and perhaps revealed some errors within its logic.


This test data has been designed to contain two pairs of similar patterns (the first two pairs) and a final pair with more significant differences, to test the effectiveness of the vigilance test.

In the above image:

– white nodes represent 0s – as the input vector has a fixed length, any gaps must be filled

– blue nodes have negative values (both feedback and exemplar)

– red nodes have positive values

For non-zero nodes, the size of the sphere represents the value of the feed-forward weight trained to that node – in this case, the normalised values of the exemplar.
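As a sketch of what ‘normalised values of the exemplar’ could mean for real-valued inputs, here is a plain Euclidean (L2) normalisation. The exact normalisation used in the network isn't stated here, so treat this as an illustrative assumption:

```python
import math


def normalise(exemplar):
    """Scale a real-valued exemplar to unit Euclidean length, so the
    stored feed-forward weights are the normalised exemplar values.
    (L2 normalisation is an assumption; the post does not specify.)"""
    norm = math.sqrt(sum(v * v for v in exemplar))
    # Leave an all-zero exemplar untouched to avoid division by zero.
    return [v / norm for v in exemplar] if norm else list(exemplar)


weights = normalise([3.0, 4.0])   # -> [0.6, 0.8]
```

With unit-length weights, the dot product against a (normalised) input becomes a cosine similarity, which also keeps negative values, matching the blue/red node colouring above.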

Adaptive Resonance Theory – ART1 Network

These screenshots show a visual representation of the output from a trained ART1 network.

Creating a functioning ART1 network is the first step towards implementing a suitable variation within my game environment; this will likely be a modified version of an ART2 network. The key difference between ART1 and ART2 is that ART1 only accepts binary input, whereas ART2 works with real numbers.

These initial tests are based around a fixed set of binary input data:


The network's outputs shown here were run at vigilance thresholds of 0.3 and 0.8 – with white nodes being inactive (0), red nodes being active (1), and node size equal to the feedforward weighting.
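For reference, the standard ART1 vigilance test compares the overlap between the input and the winning node's stored exemplar against the threshold. A minimal sketch (illustrative, not the blog's actual code):

```python
def vigilance_test(input_pattern, exemplar, vigilance):
    """ART1 match criterion: the fraction of active input bits that are
    also active in the winning node's stored exemplar must meet the
    vigilance threshold, otherwise the node is rejected (reset)."""
    matched = sum(1 for x, e in zip(input_pattern, exemplar) if x and e)
    active = sum(input_pattern)
    return active > 0 and matched / active >= vigilance


# Two of three active bits match (ratio ~0.67): passes at 0.3, fails at 0.8.
low = vigilance_test([1, 1, 0, 1], [1, 0, 0, 1], 0.3)   # True
high = vigilance_test([1, 1, 0, 1], [1, 0, 0, 1], 0.8)  # False
```

This is why the 0.8 run produces more output categories than the 0.3 run: a stricter threshold rejects partial matches and forces new exemplars.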

This version of the network already features some improvements over the standard model – primarily an extendable output node structure (recognition field). The only two variables required for operation are the size of the test data arrays (the number of binary digits, which is fixed thereafter) and the vigilance threshold value.

The recognition field is initialised with a single output node (neuron); after this ‘empty’ or ‘default’ node is trained with an exemplar pattern, a new output node is added to the field.
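The grow-on-commit behaviour described above can be sketched as follows. The class and method names here are hypothetical, chosen only for illustration:

```python
class RecognitionField:
    """Extendable recognition (output) field: there is always exactly
    one untrained 'default' node at the end of the list."""

    def __init__(self, input_size):
        self.input_size = input_size
        self.exemplars = [None]   # one untrained default node

    def commit(self, exemplar):
        """Train the current default node with an exemplar, then append
        a fresh default node so the field can keep growing."""
        assert len(exemplar) == self.input_size
        self.exemplars[-1] = list(exemplar)
        self.exemplars.append(None)


field = RecognitionField(4)
field.commit([1, 0, 1, 1])
field.commit([0, 1, 1, 0])
# field now holds two trained nodes plus one untrained default node.
```

The invariant (exactly one untrained node at all times) is what removes the need to fix the number of output nodes up front.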

This means the network is limited only by the growing size of the recognition field and the increasing time taken to resonate between it and the comparison field.

Currently the comparison (input) field is fixed once the network is initialised. As the size of this field is used within the feedforward weight scaling equation, it appears this cannot change.
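To illustrate why the comparison-field size gets baked in: in common ART1 formulations, an uncommitted node's bottom-up weights are initialised using the input dimension M, for example b_ij = L / (L − 1 + M). This is a textbook form (with L = 2 assumed), not necessarily the exact equation used here:

```python
def initial_feedforward_weights(M, L=2.0):
    """Uniform initial bottom-up weights for an uncommitted node.

    Assumes the common initialisation b_ij = L / (L - 1 + M), where M is
    the comparison-field (input) size; because M appears directly in the
    formula, resizing the input field would invalidate existing weights.
    """
    return [L / (L - 1 + M)] * M


w0 = initial_feedforward_weights(4)   # four weights, each 0.4
```

Any scheme of this shape shares the limitation described above: every node's weights are a function of M, so M must stay fixed after initialisation.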

Unity 5 Upgrade – Shadows

Work still progresses slowly on my ART implementation (more posts will follow), but I decided to quickly look at upgrading the project, as Unity 5 was released at GDC. One or two minor changes were required with the NavmeshAgent, but otherwise it was a simple transition.


Previously the option to render shadows had been restricted to Unity Pro licences only – no longer with Unity 5!


As I do not intend to develop the graphics of this application any further, the addition of shadows and new light sources really enhances the scene. The red light here pulses like a warning or alarm, while the white light flickers on and off.

The main attraction, however, was the blue cone light, which nicely represents the A.I.'s cone of vision, with areas in shadow correctly showing blind spots behind obstacles.