This version of the test environment has been built using a set of tile prefabs – each containing its own search nodes and walls. At this time some manual adjustment of wall placement is still required (deleting many, and moving others on a 0.25f grid). Construction of levels should now be much faster, and allows some automation!
The white spheres represent the grid search node structure (1.0f spacing). The walls have been adjusted to match this level's layout.
I was able to utilize the NavMesh (once baked) to automatically detect any SearchNodes that cannot be reached, and flag them as blocked (black).
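The actual detection was driven by the baked NavMesh, but the effect amounts to a reachability sweep over the node grid. A minimal, engine-free sketch of the idea in Python – the grid, wall set, and BFS flood fill here are my own illustrative stand-ins, not the project's code:

```python
from collections import deque

def flag_unreachable(width, height, walls, start):
    """Flood-fill outwards from a known-reachable node; any node the
    fill never visits gets flagged as blocked (the black spheres)."""
    reachable = set()
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) in reachable or (x, y) in walls:
            continue
        if not (0 <= x < width and 0 <= y < height):
            continue
        reachable.add((x, y))
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    # Everything outside the reachable set is blocked.
    return {(x, y) for x in range(width) for y in range(height)} - reachable
```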
The next step will be implementing the required search functionality.
Before commencing further work on the implementation of a suitable method of searching the environment, I decided to build a prototype tileset. The outer loop is built out of many individual tiles – but the NavMesh correctly joins them together (with a few small glitches).
If built in such a way, the tiles would be Unity Prefabs – most importantly containing the required node structure to allow for a grid search. Each ’tile’ or prefab would contain its own container of nodes, therefore allowing a general sweep of each tile before accessing the nodes within.
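That two-level structure – sweep the tiles first, then only touch the nodes inside a matching tile – can be sketched like this. The Tile class and node layout here are hypothetical, purely to illustrate the broad-phase idea:

```python
class Tile:
    """A prefab-style tile owning its own container of search nodes."""
    def __init__(self, origin, size, nodes):
        self.origin = origin   # (x, y) of the tile's corner
        self.size = size       # square tiles, e.g. 4.0 units a side
        self.nodes = nodes     # list of (x, y) node positions

    def contains(self, point):
        # Broad-phase sweep: is the point inside this tile at all?
        px, py = point
        ox, oy = self.origin
        return ox <= px < ox + self.size and oy <= py < oy + self.size

def find_node(tiles, point):
    """Sweep the tiles first, then search only the matching tile's nodes."""
    for tile in tiles:
        if tile.contains(point):
            return min(tile.nodes,
                       key=lambda n: (n[0] - point[0]) ** 2
                                   + (n[1] - point[1]) ** 2)
    return None
```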
Current construction does not allow me to fill in the room in the middle correctly – scaling the square tile would be possible, but this would also scale any node structure later included within. I will be looking either to design the tileset a little more carefully, or to include a wider range of rectangular sections to fill the gaps.
Stands all setup – bring it on!
Source code – Python for Maya.
Following on from the first semester's Technical Art Applications, Scripting & Dynamics was left almost completely open in terms of our submission.
After focusing on the Volibear Rig rather than the Python Tool – I decided to pursue a scripting-heavy coursework.
My chosen area was crowd simulation – specifically, I wanted to look at the interaction between herding mechanics and hunting. Ideally I wanted to look at the interactions between two groups (prey and pack), rather than single predator agents interacting with a single group of prey.
These diagrams are recreated from those shown in Craig Reynolds' BOIDS work, and show the three basic flocking rules: separation, alignment, and cohesion.
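For reference, Reynolds' three rules (separation, alignment, cohesion) fit in a few lines. This is a simplified 2D sketch with weights and a separation radius of my own choosing – in practice tuning those numbers is most of the work:

```python
def flocking_force(pos, vel, neighbours, sep_radius=2.0):
    """pos/vel: (x, y) tuples for one boid; neighbours: [(pos, vel), ...]."""
    if not neighbours:
        return (0.0, 0.0)
    n = len(neighbours)
    # Cohesion: steer towards the average position of the neighbours.
    cx = sum(p[0] for p, _ in neighbours) / n - pos[0]
    cy = sum(p[1] for p, _ in neighbours) / n - pos[1]
    # Alignment: steer towards the average velocity of the neighbours.
    ax = sum(v[0] for _, v in neighbours) / n - vel[0]
    ay = sum(v[1] for _, v in neighbours) / n - vel[1]
    # Separation: steer away from neighbours that are too close.
    sx = sy = 0.0
    for (px, py), _ in neighbours:
        dx, dy = pos[0] - px, pos[1] - py
        if dx * dx + dy * dy < sep_radius ** 2:
            sx += dx
            sy += dy
    # Illustrative weights only.
    return (0.01 * cx + 0.1 * ax + 0.05 * sx,
            0.01 * cy + 0.1 * ay + 0.05 * sy)
```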
I made these diagrams to demonstrate the idea of a pack consisting of centers and wings, and of a flanking and driving style of hunting.
It's worth noting that although my proposal was accepted, I was warned that this was an ambitious project.
This blog, kept as part of the submission, tells the story of the coursework.
This was a single semester project to have an initial look at Network Programming.
Source code – SFML 2.0, C++.
The game was created using the SFML 2.0 library – using both its graphics module and its own networking module.
Using the graphics library was an absolute pleasure – I often remarked it was like being handed our PS2 framework, only fully featured and ready to go.
The networking library also proved pretty decent – matching up directly with what we had been taught about sockets.
Although many of us had attended previous years' Dare To Be Digital – this would be our first time exhibiting at IndieFest.
The chosen game was our latest GDS project Hubble Bubble – with two Windows builds at our booth.
This was a great experience overall – and highlighted the importance of the tutorial for Hubble Bubble.
Source code – XAudio2, C++.
Allan Milne’s framework has not been included, though it remains referenced as appropriate.
This coursework was unique in a couple of ways:
firstly, although graphical elements were permitted for debugging purposes – they were not allowed to form part of the final submission;
secondly, we were given the opportunity to use the lecturer's own XAudio2 framework.
Unsure of quite how to go about making an audio only game, I eventually settled on the idea of using the Aliens motion tracker.
The idea, or story, was that your motion tracker's display was broken – and the player would have to rely on its audio output alone. During development I felt that leaving the judging of direction simply to the stereo left and right channels was too difficult and frustrating. Therefore I decided to exaggerate the player's (or listener's) audio cone, only allowing full volume in the front 90 degrees, reducing to zero volume before reaching the rear 90-degree arc.
As per the movies – a contact's (or signal's) pitch increases as its distance decreases. Finally I added some pulse rifle kill clips, and a game-over style death clip. The game's only controls were left and right rotation, with the goal being to turn towards the incoming contact before it gets too close.
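The two mappings described above – angle to volume, distance to pitch – can be sketched like this. The real game drove XAudio2 voices; the fade shape and the specific numbers here are my own illustrative choices:

```python
def cone_volume(angle_deg):
    """Exaggerated listener cone: full volume inside the front 90 degrees,
    fading linearly to silence before the rear 90-degree arc."""
    a = abs(angle_deg) % 360
    if a > 180:
        a = 360 - a                    # fold to 0..180, 0 = dead ahead
    if a <= 45:                        # inside the front 90-degree cone
        return 1.0
    if a >= 135:                       # the rear 90-degree arc is silent
        return 0.0
    return 1.0 - (a - 45) / 90.0       # linear fade between the arcs

def contact_pitch(distance, max_distance=50.0):
    """As in the films: pitch rises as the contact closes in.
    Returned as a playback-rate multiplier between 1.0 and 2.0."""
    closeness = 1.0 - min(distance, max_distance) / max_distance
    return 1.0 + closeness
```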
In practice the game still proved a little hit and miss. As the movement was gradual and rotational – I could not think of any suitable audio to represent movement. Looking back I believe it would have been suitable to exaggerate the audio cone even further by eliminating the inner (full volume) cone altogether – so a user could only achieve full volume by aligning directly to a signal.
For our Procedural Methods module, I decided to explore terrain height map generation – specifically using the Diamond-Square algorithm.
Source code – C++.
As this was based on last semester's Shader Programming submission, again the Rastertek code is not present.
Rather than starting from code found online, I decided to approach the implementation by researching the algorithm. Though I certainly made a few mistakes along the way, and it's certainly not the most correct or efficient method (if there is such a thing), this proved to be an interesting piece of coursework.
Sadly I believe my own diagram is slightly wrong…
2 and 4 of the 2nd iteration of the diamond stage are the wrong way round.
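Diagrams aside, the algorithm itself is compact. A pure-Python version as I understand it – naming and the roughness falloff are my own; grids must be (2^n)+1 on a side:

```python
import random

def diamond_square(n, roughness=1.0, seed=0):
    """Generate a (2**n + 1)-square height map as a 2D list."""
    rng = random.Random(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: each square's centre becomes the average of its
        # four corners, plus a random offset.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: each diamond's centre becomes the average of its
        # (up to four) edge neighbours, plus a random offset.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale / 2.0   # halve the step and the noise
    return h
```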
One of the techniques introduced in the previous semester's Shader Programming was post-processing. I'd guess more than a few of us did not include any for that submission – as it was a requirement for this one.
I set my sights on implementing a depth based blur effect, and I believe I got pretty close to getting the depth information stored as a render to texture.
In the end this Photoshop’d image was a reminder of the failed goal.
Instead I opted to modify the existing blur process – by adding a radial component. This was achieved rather sneakily by abusing the existing texture coordinates.
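One common way to get a radial blur out of an existing blur pass – and my reading of the coordinate trick above – is to reuse each pixel's own texture coordinate, scaled towards the screen centre, as the offset for every tap. A Python sketch operating on a plain 2D array rather than a texture, with sample count and strength of my own choosing:

```python
def radial_blur(img, samples=8, strength=0.2):
    """img: 2D list of floats. Each output pixel averages taps taken
    along the line from that pixel towards the image centre."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for i in range(samples):
                # Scale the coordinate towards the centre, much as the
                # shader scales its texture coordinates per tap.
                t = 1.0 - strength * i / samples
                sy = int(round(cy + (y - cy) * t))
                sx = int(round(cx + (x - cx) * t))
                total += img[sy][sx]
            out[y][x] = total / samples
    return out
```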
The blur effect off and on.
Even considering I opted to skip the down- and up-scaling stages of the blur method (and therefore render the whole scene a further two times), the impact on performance was pretty severe.
It had been two years since Media Production for Games and my initial taste of Maya.
This module taught the techniques used to create a rig much like the 'Jimmy' rig we had animated for that year's submission.
For this submission I decided to have a go at rigging Volibear – looking at the opportunities involved in getting the skinning right with all the armour he’s wearing.
Unfortunately I spent rather too long getting the skeleton and IKs behaving correctly, leaving the skinning too late. This resulted in some pretty substantial mesh tearing – particularly with the upper neck armour plates.
However the skills gained here would later prove valuable for the group project – Titan1um.
Acting almost as a follow-up to the previous year's Graphics Programming – this year we would be looking at Shader Programming in DirectX.
Source code – C++.
This module was taught using the Rastertek tutorials for DirectX 11 – and as that comprises the bulk of the code I have only included the reworked camera along with the pixel and vertex shaders.
Not to bash the tutorials too much – they were only intended to introduce the techniques. As it turned out, the tutorials also tended to work only under very specific circumstances, so the bulk of the learning for this submission was in the creation of the scene.
The submission slides again explain my areas of focus – the terrain and ocean shaders.