Final Test Environment

During the final integration process it was necessary to finalise the details of the test environment. Structurally it has been mostly unchanged since the conversion to Unity 5; the remaining details defined exactly what the A.I. would be capable of ‘seeing and hearing’.

Building on the previously pitched ‘static’ sound emitters, I decided to place four static ‘decoys’. The player can activate only one at a time, and only when within a certain proximity – the activation button is shown over a decoy when it can be turned on. An active decoy presents both an audio and a visual contact to the A.I. – continuously broadcasting an audio contact, and registering a visual contact when within the A.I.’s line-of-sight (now represented by blue arcs). The decoy stays active for a set amount of time, and is visually represented as a pair of rotating orange spotlights.
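The activation rules above (proximity gating, only one active decoy, timed expiry) could be sketched like this – a minimal Python sketch, not the project’s actual Unity C# code; all names and the range/duration constants are illustrative assumptions:

```python
ACTIVATION_RANGE = 3.0   # assumed proximity radius for the activation prompt
ACTIVE_DURATION = 10.0   # assumed decoy lifetime in seconds

class Decoy:
    def __init__(self, position):
        self.position = position  # (x, y, z) tuple
        self.time_left = 0.0

    @property
    def active(self):
        return self.time_left > 0.0

class DecoyManager:
    """Enforces the rule that only one decoy may be active at any time."""
    def __init__(self, decoys):
        self.decoys = decoys

    def try_activate(self, player_pos, decoy):
        dist = sum((a - b) ** 2 for a, b in zip(player_pos, decoy.position)) ** 0.5
        if dist > ACTIVATION_RANGE:
            return False  # too far away: no activation prompt shown
        if any(d.active for d in self.decoys):
            return False  # another decoy is already broadcasting
        decoy.time_left = ACTIVE_DURATION
        return True

    def tick(self, dt):
        # Count the active decoy down towards expiry each frame.
        for d in self.decoys:
            d.time_left = max(0.0, d.time_left - dt)
```

A manager like this would refuse a second activation until the first decoy’s timer runs out, matching the one-at-a-time behaviour described above.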

[Images: Decoy1, Decoy2]

A player-launched contact had been planned from the beginning. This has been implemented in the form of a green flare, launched directly from the player in the chosen direction. The range is fairly short; the player can throw it over the ‘low’ obstacles (lighter grey), but not over anything higher. Representing a player-like decoy, the flare also functions as both a constant audio and visual contact – lasting a set amount of time, with only one allowed active at a time. Unlike the static decoys, however, it is thrown and rolls, so it is capable of movement like the player.

[Image: Flare]

The final piece of added functionality was to allow the player to crouch. This lets the player both break line-of-sight using low obstacles and move silently (though slower than running) – the idea being to give the player room to duck behind an obstacle and break contact with the A.I. At this time I’m undecided whether I should allow the player to throw flares while concealed behind a low obstacle. That said, gameplay balance was never a focus for this project – just the possibilities of messing with the A.I.

[Images: Crouching1, Crouching2]

Adaptive Resonance Theory – ART2

After a rather significant reworking, the modified ART1 network now appears to function correctly. I call it ART2, but in reality this network is far closer to its binary cousin, ART1.

Importantly, the test data used represents actual Vector3 positions from within my test environment – 12 floats representing four Vector3s, the first two allocated for visual contacts and the second two for audio contacts.
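That 12-float layout can be sketched as follows – a minimal Python sketch; the function name and example positions are illustrative, not the project’s actual data:

```python
def build_input(visual, audio):
    """Flatten four (x, y, z) positions into a 12-float ART input pattern.
    Slots 0-1 hold the visual contacts, slots 2-3 the audio contacts."""
    assert len(visual) == 2 and len(audio) == 2
    pattern = []
    for pos in visual + audio:
        pattern.extend(pos)  # each Vector3 contributes 3 floats
    return pattern

pattern = build_input(
    visual=[(1.0, 0.0, 2.5), (4.0, 0.0, -1.0)],
    audio=[(0.5, 0.0, 3.0), (-2.0, 0.0, 6.0)],
)
print(len(pattern))  # 12
```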

Vigilance Threshold = 0

As expected, all six patterns are unique – even the first two pairs have slight differences (this was deliberate in the test data).

[Image: 0Vigilance]

Vigilance Threshold = 1

The magnitude of the difference vector for each of the first two pairs was below 1 – these have now been accepted as members of their respective classes and the initial node retrained (with such small differences, the retraining is hard to see).
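A vigilance check of this kind can be sketched as below – a Python sketch under the assumption that a pattern matches when the difference magnitude is at or below the threshold; the exact comparison and the example values are illustrative:

```python
import math

def difference_magnitude(input_vec, exemplar):
    """Euclidean magnitude of the difference vector (input minus exemplar)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(input_vec, exemplar)))

def passes_vigilance(input_vec, exemplar, vigilance):
    # Smaller magnitude means a closer match; accept the pattern into
    # the class when the magnitude does not exceed the threshold.
    return difference_magnitude(input_vec, exemplar) <= vigilance

# Two nearly identical 12-float patterns, differing slightly per component:
a = [1.0, 0.0, 2.5] * 4
b = [1.1, 0.0, 2.4] * 4
print(passes_vigilance(a, b, vigilance=1))  # True: magnitude well below 1
print(passes_vigilance(a, b, vigilance=0))  # False: any difference fails
```

At vigilance 0 every distinct pattern founds its own class, which matches the first test run above.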

[Image: 1Vigilance]

Vigilance Threshold = 28

It takes a rather high vigilance of 28 for the final (third) pair to be considered a match – the sign of the x component of the second Vector3 differed between the two patterns. In practice a vigilance this high would not be useful for my application.
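The sign flip is what drives the threshold so high: flipping the sign of one component x moves that component’s contribution to the difference magnitude from 0 to 2|x|. A quick illustration – the values here are made up for the example, chosen so the flipped component is 14, and are not the actual test data:

```python
import math

def diff_mag(u, v):
    """Euclidean magnitude of the difference vector between two patterns."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Identical 12-float patterns except the x of the second Vector3 (index 3)
# has its sign flipped.
p1 = [1.0, 0.0, 2.0,  14.0, 0.0, 3.0,  0.5, 0.0, 1.0,  2.0, 0.0, 2.0]
p2 = list(p1)
p2[3] = -p2[3]  # 14.0 -> -14.0

print(diff_mag(p1, p2))  # 28.0 – exactly twice the flipped component
```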

[Image: 28Vigilance]

The losses:

– The concept of using the dot product no longer works – vigilance is now measured by the magnitude of the difference vector between the input and the exemplar. Without the dot product, this involves bypassing the normal ART architecture when an untrained (blank) node is selected; in this implementation the difference vector is forced to all zeros (zero magnitude) to force a match.

– The feed-forward weight scaling equation has been altered to: value / ( 1 + the magnitude of the input vector ). This allows for negative weights, but the network still seems to function correctly – matching signs result in a higher weighting, while differing signs lower it.