The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
AI and Games is a crowdfunded video series about research and applications of artificial intelligence in video games. If you like these write-ups, please consider supporting the show over on Patreon for early access and behind-the-scenes updates. If you’d like to work with us on your own games projects, please check out our consulting services provided at AI and Games.
The 2016 release of IO Interactive’s Hitman brought the franchise back to its roots, of creating rich and interesting scenarios in which Agent 47 needs to eliminate his targets, often in improvised and impractical ways. To achieve this, the game has a myriad of AI systems under the hood, and this is what we’re here to explore. We’re going to dive deep into the suite of AI systems built to support the current generation of Hitman: creating responsive NPCs, bodyguards, crowd systems, AI-driven animation and more as players once again become the sandbox assassin.
Players in Hitman are tasked with killing horrible and unscrupulous individuals that the world won’t really miss. But in each instance it’s more about the unique stories players craft for themselves as they eliminate targets in nail-biting or simply hilarious ways. Hitman is laced with systems that enable you to experiment, improvise and react to changes in the larger story that’s playing out and the progression being made towards eliminating your target. Poison drinks as a barman, snap necks as a masseur, scare someone as a plague doctor, even gain the trust of a recording team by playing the drums. The game provides a range of fun and often bespoke methods to zone in on and remove your quarry in increasingly complicated and expansive scenarios. Whether you’re walking the catwalk of Paris fashion week or sneaking through political upheaval in Marrakesh, your targets are found in dense, lively and complicated environments, with personal security and local law enforcement – not to mention the general public – proving to be obstacles. But this is where the game becomes really interesting, as players can blend into their surroundings, assume disguises and manipulate the world around them, all in an effort to get one step closer to the target, deliver some karmic retribution and subsequently escape with the world around them unaware of their presence.
Hitman is at its most satisfying when your preparation and planning come to a head and are executed perfectly, and it takes numerous AI systems to pull it off. In this video we’re going to focus on four key systems, how they work and what they do to help build the Hitman experience:
- A situation-driven behaviour tree system, where over 300 characters in a given level can make decisions based on what they know is happening around them, react to changing circumstances and often play directly into the player’s hands.
- A responsive and pragmatic bodyguard and security system that ensures targets become all the more difficult to reach, rushing them to secure defensive locations in the event they become compromised.
- A powerful crowd behaviour management and simulation system that enables over 1,000 crowd NPCs to be active in a given level while still reacting to the actions the player executes.
- And lastly, a finely tuned animation framework that ensures all of these characters look and behave as realistically as possible.
I’m going to explain how all of this works, as well as some of the finer design tweaks adopted for both the 2016 release and the 2018 sequel. However, before we can talk about the AI of either game, I need to look at where the current AI toolchain started and the core principles behind its design. To do that, we’re going to take a look at Hitman Absolution.
Released in 2012, Hitman Absolution was a more streamlined and linear interpretation of the franchise’s formula that – while not as critically acclaimed as the more recent entries – is the basis of the core technology and tools used in both the 2016 and 2018 releases. One of the big reasons for this is that Absolution was the first entry in the franchise since 2006’s Hitman: Blood Money, given the intervening years at IO Interactive were spent on the Kane & Lynch titles as well as Mini Ninjas. Hence Absolution built new tools and systems that better reflected the team’s ambitions for the coming years. The core architecture of Hitman’s non-player character AI, as well as the systems for crowds and animation, were all built for Absolution, then embellished upon in both the 2016 and 2018 Hitman releases.
Hitman’s AI is built to cater to the variety of playstyles that Absolution and subsequent games have afforded. This means the system supports you approaching using stealth tactics, wearing disguises and blending into your environments – referred to as ‘social stealth’ by the developers – or taking a more aggressive approach and going in guns blazing.
But in addition, it’s built to have NPCs living their lives as expected, often with some sort of established routines of behaviours: they’re serving drinks, applying makeup, guarding entryways, cooking dinner, whatever is needed to fit the fiction of the location. These routines need to be predictable and reliable – given players rely on this sort of predictability in stealth games – but can be interrupted as NPCs investigate the many, many, many items and objects that players can manipulate in the world, whether you’re turning off a motor, causing a sink to overflow, puncturing a fuel tank or playing a video tape. Depending on the type of interaction, it will either draw characters in with immediate effect, given it interrupts the character’s activity, or cause a more delayed interruption to specific characters’ routines such that they will see what you’ve done sooner rather than later. This is useful for some of the more specific actions that lure targets into locations, given you don’t want to wait 20 minutes for the payoff, but at the same time, if they run over immediately after you trigger the action, it will seem stilted and forced.
Hitman’s core AI is reliant on behaviour trees: arguably the most common format for modern game AI behaviour management. Behaviour trees take a top-down approach to structuring behaviour and how characters respond to specific circumstances, with behaviours often selected due to specific conditions in the world and resulting in single- or multi-action sequences as characters respond to what the player and other game systems are doing.
But before the behaviour tree can make a decision, each character has to know what’s happening in the world around them. To do that, each active AI character in the world has a knowledge base. A knowledge base is comprised of two sets of data: public knowledge, which is information made public about a given item or character as well as information that reflects its state, and private knowledge that a character maintains about the last known positions of objects or characters, retaining a history of knowledge about specific items. Hence a character will notice when items in proximity go missing or are tampered with, and similarly won’t keep trying to find or interact with characters that are either missing or dead.
The knowledge base is accumulated by taking all the information happening in the game world at a given point in time and distilling it down via both sensors and services. Sensors allow characters to update their personal data and react to changes as they see and hear things in local proximity. Meanwhile services update the shared knowledge base. Three of the most common services are the Disguise, Deadbody and Hitman services. The Disguise service is used when Agent 47 is wearing a costume and helps characters know whether that outfit will look suspicious to them or not. Meanwhile the Deadbody service helps characters understand that a dead body has been found in local proximity recently. Lastly, the Hitman service allows common knowledge of Agent 47 to be shared, such as whether the player is compromised and where in the world the player was last spotted. What’s fun about the sensors and services is that they provide a balance of contextual knowledge, but also leave gaps for players to exploit. The sensors provide immediate information about the world, but can be interrupted by breaking line of sight or distracting the character. Meanwhile, services are not omnipotent and only update periodically. Hence a lot of information from the services can become out of date pretty quickly, and that leaves scope for the player to manipulate characters to suit their needs.
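To make the split between public and private knowledge concrete, here’s a minimal sketch in Python of how such a knowledge base might be structured. This is my own illustration, not IO’s code: the class, the field names and the rule that direct observations win over shared service data are all assumptions based on the description above.

```python
class KnowledgeBase:
    """Hypothetical NPC knowledge base: shared 'public' facts pushed by
    services, plus 'private' last-known info from direct observation."""

    def __init__(self):
        self.public = {}   # entity_id -> state shared via services
        self.private = {}  # entity_id -> this NPC's own observations

    def sense(self, entity_id, state):
        # A sensor hit: the NPC directly saw or heard this entity.
        self.private[entity_id] = dict(state)

    def service_update(self, entity_id, state):
        # A service pushes shared knowledge; services only run
        # periodically, so this can lag behind the true world state.
        self.public[entity_id] = dict(state)

    def last_known(self, entity_id):
        # Direct observations take priority over shared service data.
        return self.private.get(entity_id) or self.public.get(entity_id)
```

The gap players exploit falls out naturally here: if `service_update` only runs periodically, the public entry can be stale while the NPC has no fresh private observation to correct it.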
But now that the AI has all of this information, it needs to make decisions about how to react. Each AI character can activate one of a set of goals that drive the decisions they’re about to make. With a goal selected, it will then use the behaviour tree to decide how to behave next. I still haven’t explained that yet, but I’m getting to it.
To diversify the gameplay, the game breaks down AI-driven NPCs into two main types: Civilians and Guards. In each instance, it impacts the information stored in their knowledge base and the types of goals they can assign themselves.
First up there are Civilians. These characters are typically passive in nature, but will spot more extreme behaviours as suspicious or alarming. This can activate goals such as confronting the player and alerting guards, but it doesn’t allow them to go into combat. Two great examples of this can be seen in the Paris fashion show level and the Bangkok hotel in the 2016 Hitman, where civilians with different knowledge base profiles result in different behaviour. The guests are a lot more passive but will react to your behaviour if it proves too excessive, while staff and the entourage surrounding your targets are much more likely to react to your shenanigans and confront you.
Secondly, there are Guards. These are naturally the player’s main adversary and will engage in combat with you, as well as respond to more extreme circumstances such as the discovery of dead or unconscious bodies, or weapons and other sensitive objects found lying out in the open. As mentioned already, Hitman 2016 introduces bodyguard AI that actively follow and guard a VIP as they move around an environment, while still leaving the VIP free to explore the world. This requires an entirely separate system to operate and I’ll explore that later in this video.
Now given all this information and the goals that have been activated, the behaviour tree system is built to have characters do one of two things: they either execute behaviours on their own or they join ‘situations’, where behaviours are coordinated by a group. A behaviour is one or more actions, ranging from turning to the player to interact or talk with them, to moving to locations in the world, entering combat or interacting with objects the player might have manipulated. You’ll notice playing the game that a lot of the targets will execute specific behaviours once the player has manipulated the world in a particular way. This is because this aligns with specific goals the targets should complete once certain conditions are met – meaning they deliberately put themselves in precarious positions for you to kill them.
Meanwhile, situations often arise when events occur in the world that have a larger impact, such as when dead bodies are discovered, the player is compromised or the target has realised they’re under attack and tries to flee. In these cases, characters will recognise what’s happening around them and join a situation, and the situation dictates who does what, with everyone playing their part in the fiction. So for example, in the event you open fire on the crowd at the fashion show, the guests and staff will often flee and call for help, while the guards will hunt you down and open fire.
The behaviour tree system was first devised in Absolution and has since been heavily optimised and improved for both the 2016 and 2018 releases. It utilises a level-of-detail or LODding system, meaning that while there can be over 300 active behaviour-tree-driven AI in a given level, the ones farthest from the player update their behaviour less frequently, as the CPU resource used for updating AI prioritises those closest to you. In addition, the animations of these distant AI are either dialled down in quality or turned off entirely. As mentioned already, the vast majority of the characters you see are controlled by the crowd AI system, but if necessary any character in a crowd can – depending on the current situation in the game – be possessed by a behaviour tree AI, effectively upgrading them to a fully responsive and coordinated character.
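The LOD scheduling idea is simple enough to sketch: NPCs nearest the player think every tick, while distant ones think far less often. This is a hypothetical illustration of the technique; the distance bands and update intervals are invented, not IO’s actual values.

```python
def update_interval(distance, near=10.0, far=60.0):
    """Map an NPC's distance from the player to a think-rate in ticks.
    Nearest NPCs update every tick; distant ones far less often.
    All thresholds here are made-up illustrative values."""
    if distance < near:
        return 1   # every tick
    if distance < far:
        return 5   # every 5th tick
    return 20      # every 20th tick


def npcs_to_update(npcs, tick):
    """npcs is a list of (npc_id, distance_to_player) pairs; return the
    IDs whose AI is due for an update on this tick."""
    return [npc_id for npc_id, d in npcs if tick % update_interval(d) == 0]
```

With a scheme like this, 300 behaviour-tree NPCs never all think on the same frame, which is the whole point of the LODding described above.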
Now before we get into the bespoke systems built for the 2016 release, let’s tackle the second core system built in Absolution: the crowd AI framework. When first built for Absolution, it could support up to 1,200 agents in a single crowd, with up to 500 of them visible at the same time. It’s not something I’ve talked about before on the show, but crowd systems are really difficult to get right. Crowds are often comprised of lots of dummy AI that are nowhere near as smart as their more regular contemporaries. A big reason for that is there are just so many of them, but it also makes sense given they’re often just adhering to a crowd mentality, moving through a convoluted space to their ultimate destination. But in Hitman, the goal was to try and blur the line between the crowds and the behaviour-tree AI as much as possible.
Characters in crowds are placed either as individuals that are able to walk around and potentially stop to acknowledge things in the space, or as groups that designers have a lot more control over. A group is really useful if you have a particular activity you want a bunch of characters doing in a crowd, such as looking at a point of interest or standing in a particular shape or configuration. Each individual character runs a system akin to a simple finite state machine, with only three main states of behaviour: standing idle, walking, and what’s called ‘pending walk’. The latter is useful in situations where a character wants to be moving, but the congestion of the crowd has stopped them. So while they’re stuck in the crowd, they’re making decisions about a better direction they can take and waiting for an opportunity that would enable them to transition back to the walk state. This is actually a lot more complicated than I’m making it out to be, given the characters need to tweak their walking speeds as they move around. Hence each character is able to set preferred and maximum speeds, and this is used to help keep the flow of the crowd – something that becomes even more relevant when a panic breaks out and people are running around everywhere.
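The three-state machine described above can be sketched in a few lines. This is an illustrative reduction on my part; the real system also juggles speeds, groups and animation concerns.

```python
# The three crowd states named in the talk: idle, walk, pending walk.
IDLE, WALK, PENDING_WALK = "idle", "walk", "pending_walk"


def next_state(state, wants_to_move, path_blocked):
    """Minimal sketch of the crowd agent's state transition logic:
    an agent that wants to move but is blocked by congestion sits in
    'pending walk' until a gap opens up."""
    if not wants_to_move:
        return IDLE
    return PENDING_WALK if path_blocked else WALK
```

An agent in `PENDING_WALK` would keep re-evaluating direction each update, flipping back to `WALK` the moment `path_blocked` clears.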
Now one of the real risks of having this many agents in a crowd system is them polling a navigation mesh for paths to take, or even checking the validity of being able to reach a given destination. Navigation meshes are what are typically used in 3D games to enable characters to move around environments. IO realised this was pretty critical, and as a result there’s a 2D reasoning grid that sits atop the navigation mesh. It tells an AI character whether a given space is walkable, whether it should be avoided given it’s a security region, where the exit points for NPCs are, and also which areas are cordoned off – say, because a panic has broken out in the crowd because Agent 47 is causing trouble, and it would break the immersion if a character just waltzed through it without caring. Both the crowd and behaviour tree AI have their own reasoning grids, with the latter providing a fast and effective means to conduct sight tests, find valid locations near points of interest and identify tactical locations relative to a given character, all without overloading the nav mesh itself.
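A reasoning grid of this kind can be sketched as a 2D array of bit flags sitting above the navmesh, so a crowd agent can answer ‘can I stand here?’ without a navmesh query. The flag names and the crowd query rule are assumptions on my part, mirroring the description above.

```python
# Illustrative cell flags: walkable space, security regions to avoid,
# cordoned-off panic areas, and NPC exit points.
WALKABLE, SECURITY, CORDONED, EXIT = 1, 2, 4, 8


class ReasoningGrid:
    """Hypothetical 2D flag grid sitting atop the navigation mesh."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.cells = [[WALKABLE] * width for _ in range(height)]

    def set_flag(self, x, y, flag):
        self.cells[y][x] |= flag

    def crowd_can_enter(self, x, y):
        # Crowd agents stay out of security and cordoned-off regions.
        cell = self.cells[y][x]
        return bool(cell & WALKABLE) and not (cell & (SECURITY | CORDONED))
```

Because each query is a couple of bit operations, thousands of crowd agents can consult the grid every frame far more cheaply than polling the navmesh.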
But the real challenges come when you introduce Agent 47 into the mix. The crowd is going to wander around and do their own thing, but when the player starts causing trouble, the AI systems need to react in kind. Characters immediately around the player should look panicked and potentially flee, but this should have a cumulative effect across the crowd and force movement across the space as everyone in proximity begins to run from the source of the commotion. This is addressed using two systems called behaviour zones and panic flows.
So first up, any action in the world that should influence the crowd sends out one or more behaviour zones into the environment. A behaviour zone sends out a pulse within a certain radius and angle relative to the source of the event that dictates how every crowd AI that interacts with it should respond. Hence if fireworks are going off, then all characters within a certain radius will turn to face the point of interest and watch them explode. Meanwhile if the player pulls out a gun and aims it at an innocent bystander, it sets off three behaviour zones:
- One that makes those in the line of fire go prone.
- One that makes characters in the immediate proximity of the player panic.
- Lastly everyone within a certain range of the player becomes aware of the situation and enters a heightened state of alertness.
As you move around the space with the gun drawn, those behaviour zones will continue to pulse out from the player and force other crowd AI you’re moving towards to begin to react. There are a variety of responses a crowd AI can be told to execute by a behaviour zone: either looking at a point of interest, avoiding the source of the zone, being alerted to it, being scared by it, or going prone. In the event a crowd AI is caught between multiple behaviour zones, they give priority to the action that will impact their mood the most. Also, according to my research, they never calm down once you set them off – so if you upset a character, you can only make them worse and never improve the situation. You’re just a really negative influence on people.
These behaviour zones are tweaked by designers on a per-level basis, given they’re designed to have characters react to events happening in the world, but they need to make sense in context. Hence, for example, if a fight breaks out in the bar at fashion week in Hitman 2016, that’s going to seem out of place and the crowd will react to it quite negatively. Meanwhile if you kick off trouble in the Vixen Club in Hitman Absolution, it’s not treated as seriously. Because, well… men are pigs and strip clubs are not exactly renowned for being classy institutions.
But now that you’ve caused a ruckus, the crowd wants to get the hell out of your way, and that’s where the panic flows kick in. Panic flows are built on the reasoning grid I mentioned earlier: once designers place each exit, every cell in the map calculates the shortest path to a given exit using a variation of Dijkstra’s algorithm. These paths towards each exit are the panic flows, and a crowd AI will select the best flow to follow in the event of an emergency. These flows are calculated for all exits in a given space, allowing crowd AI to dynamically shift to another exit in the event there’s a bottleneck, but also helping to address situations where the player decides to block the nearest exit.
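A panic flow of this sort is easy to sketch: flood out from every exit over the walkable cells of the grid (a uniform-cost variant of Dijkstra’s algorithm), then have fleeing agents step downhill along the resulting distance field. This is my reconstruction of the idea, not IO’s implementation.

```python
from collections import deque


def panic_flow(walkable, exits):
    """Flood-fill from every exit cell over a set of walkable (x, y)
    cells, giving each cell its grid distance to the nearest exit.
    With uniform step costs this is equivalent to Dijkstra's algorithm."""
    dist = {e: 0 for e in exits}
    queue = deque(exits)
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in walkable and n not in dist:
                dist[n] = dist[(x, y)] + 1
                queue.append(n)
    return dist


def flee_step(dist, pos):
    """A panicking agent steps to whichever neighbour is closest to an
    exit, following the flow field downhill."""
    x, y = pos
    options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    reachable = [n for n in options if n in dist]
    return min(reachable, key=dist.get) if reachable else pos
```

Because the field is precomputed per exit, an agent switching to another exit (say, because the player blocked the nearest one) is just a matter of following a different distance field; no per-agent pathfinding is needed.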
With the behaviour tree AI and crowd AI systems being improved on for Hitman 2016, there was still one major feature to be introduced: the VIP and Bodyguard system. Many of the high-value targets in Hitman have an entourage of one or more bodyguards that add an extra layer of frustration for players to overcome. Sure, each level is designed such that you can find ways around them, often by having them move into compromising positions, or by wearing costumes that allow you to slip past unnoticed. But in order to add this new layer of complexity, it required an entirely new system to be built into the established AI framework.
The real design problem that needed to be addressed here was that bodyguards need to have a relationship with the VIP. Ordinarily, AI characters don’t really know that each other exist. The crowd AI only knows that something is blocking their movement in the world, while the behaviour tree AI never really knows what other characters are doing. As mentioned earlier, the situation system is used to make them appear coordinated and react to things happening in the world, but they’re not willingly joining other characters to be part of that situation, nor are they communicating with each other – given they rely on their sensors and services. They simply make themselves available to be added to a situation, and a separate system selects them based on their local position and other criteria. But in this instance, you need to have that relationship between the two AI character types. Bodyguards should always be following the VIP and check in on them if they’ve not been seen for a while. Meanwhile, a lot of the systemic responses to gameplay systems shouldn’t be handled by the VIP (given they’re, y’know, a very important person), who instead commands their bodyguards to do it for them.
This required entirely new situations for escorting and evacuating VIPs to be built into the main AI system, with very specific design rules dictating how they should work. To get it running, IO designed two special character types. VIPs are a variant of the civilian AI who react to situations slightly differently, including a new ability to tell other people what to do. Meanwhile, bodyguards are a variant of the guard AI that run a slightly modified behaviour set and actively pay attention to their assigned VIP.
First up, the system enables a situation known as Ambient VIP Escort, where bodyguards will follow a VIP around the environment. The bodyguards use the navmesh and pathfinding systems to keep within proximity and follow the character around the space. In the event the VIP decides to stop, then the bodyguards default to idle behaviours and wait until they need to follow again.
However, this means the VIPs can’t get any peace and quiet. Naturally a VIP might want to be left alone to conduct business, which also helps you as the player, given it leaves the target open for you to do your dirty work. Hence VIPs can tell bodyguards not to follow them into certain locations, and level designers leave mark-up on the map that tells bodyguards where to stand in the event they need to wait around.
But this exposes a new vulnerability, in that a VIP could become lost, or the bodyguards would just stand around and never bother to check their boss is still alive. That kind of defeats their purpose. So another situation, the Escort-Search behaviour, was added to the game. In the event the game recognises the VIP cannot continue their patrol around the map (because they’ve been held up, or they’re unconscious, or most likely dead), it kicks off a timer. Once it reaches a certain limit, the closest guard will investigate the last known location of the VIP, and in the event they don’t find them, other bodyguards will join in and fan the search out across a larger pool of nearby locations.
In the event they find the VIP and everything is fine, then things resume as normal. If the VIP is dead, then an active search for the player kicks in, but if the VIP is found unconscious, they execute the Evacuate VIP situation: the last new behaviour added for the 2016 game.
The evacuation situation is built to have bodyguards move VIPs into safe locations. It can be triggered either by the VIP becoming aware of an attempt on their life, or by gunfire, explosions and any other alarms triggered in proximity. An evacuate situation has nearby guards form up around the VIP, with the group then moving to a safe room. Safe rooms are locations marked up by designers as good places for the VIP to hide out. When a group is going to move towards a safe room, the system calculates a threat influence map over the reasoning grid, preferring paths that have the least amount of congestion – which is practical given you don’t want the crowds causing a bottleneck. The paths calculated are deliberately built to avoid hugging corners, given there are numerous bodyguards all protecting the same VIP and the system doesn’t want them to bunch up and walk into each other. In the event no safe room can be found or no path can be built, they instead opt for the best evacuation route available, holding position until a new opportunity presents itself.
In any case, the guards form up on the VIP to initiate the evacuation. This requires them to assume a formation: a separate formation system assigns characters to specific slots and prioritises the order in which they join up, followed by another system that maintains movement speeds and keeps the formation in place as best it can, using the VIP as the anchor point. However, in the event the player gives chase, bodyguards can break off from the formation in order to stay and fight.
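The slot-based formation logic can be sketched as follows: compute slots around the VIP anchor, then fill slots in priority order with the nearest available guard. The circular layout and the greedy assignment are assumptions I’ve made for illustration; the real system’s slot shapes and priorities aren’t documented.

```python
import math


def formation_slots(vip_pos, count, radius=1.5):
    """Hypothetical slot layout: points evenly spaced on a circle
    around the VIP anchor (radius and shape are invented)."""
    return [(vip_pos[0] + radius * math.cos(2 * math.pi * i / count),
             vip_pos[1] + radius * math.sin(2 * math.pi * i / count))
            for i in range(count)]


def assign_slots(guards, slots):
    """Greedy assignment: each slot, in priority order, grabs the
    nearest unassigned guard, so key positions fill first.
    guards is a list of (guard_id, position) pairs."""
    unassigned = dict(guards)
    assignment = {}
    for slot in slots:
        if not unassigned:
            break
        gid = min(unassigned, key=lambda g: math.dist(unassigned[g], slot))
        assignment[gid] = slot
        del unassigned[gid]
    return assignment
```

Since the slots are defined relative to the VIP, the follow-up system that keeps the formation moving only needs to re-evaluate slot positions each frame as the anchor moves.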
On arrival, all guards assigned to the situation then fan out and execute a defensive behaviour to keep the VIP protected. If the player then turns up and compromises the safe room, the system will cause the evacuation to kick off again, with NPCs forming up once more and a new room and evacuation path being calculated.
The last major AI component of Hitman is the one that players typically see most: the animation systems. Animations in games are critical in telegraphing the decisions being made by a given character, and this is even more critical when working in stealth games, where the player needs to be able to see what NPCs are thinking – or at least an approximation of it – given that will influence your overall strategy.
Traditionally, animation is handled by a controller, whereby the system knows that in certain contexts a character transitions from idle to walking to running. This typically requires programmers and animators to build these systems not only so each character knows which animation it should be executing, but also how to transition between animations smoothly, as well as how to deal with the myriad of circumstances where you need to blend multiple animations together to achieve a specific desired effect. If it doesn’t work, the characters look stilted and awkward, but getting it right for each character is a lot of work, and it only becomes more complicated as characters find themselves in more unique circumstances and as more animations are added to the system. This is an issue id Software resolved by building an AI-driven animation controller in DOOM 2016 that minimised the number of animations made for each character and then calculated how best to manage them depending on the current context. But the thing that’s unique to Hitman is that IO Interactive can’t afford to have over a thousand complex animation controllers active in a scene at once. So it needs something cheaper, ideally calculated in advance to minimise the CPU overhead.
As a result, IO Interactive explored an approach referred to as Motion Graphs, where the goal was for designers to plug all of their animations into the system and for the AI to figure out how they should blend together. The idea behind it is pretty straightforward: if we have all of these animations recorded, then instead of having humans sit and figure out how they should switch between one another, an algorithm finds the frames where two animations are similar enough to act as a transition point between them. This is achieved by finding a pair of frames – one in each animation – where the difference between them, once factoring in speed and direction, is as small as possible. Once the dissimilarity metric selects the pair with the lowest value, it adds them to a graph of all possible transitions. Once the motion graph is built, a character can simply run through it to figure out the sequence of animations required to achieve a specific desired effect.
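A toy version of that dissimilarity search might look like this, with each frame reduced to a flat tuple of joint positions. Real motion graph metrics also weight joint velocities, root direction and windows of frames; this sketch only captures the ‘find the least dissimilar pair’ idea.

```python
def frame_cost(frame_a, frame_b):
    """Dissimilarity between two frames: here just summed squared
    differences between joint values. A real metric would also factor
    in speed and direction, as described above."""
    return sum((a - b) ** 2 for a, b in zip(frame_a, frame_b))


def best_transition(clip_a, clip_b):
    """Search every frame pair across two clips for the least
    dissimilar pair; that pair becomes a transition edge in the
    motion graph. Returns (frame index in A, frame index in B, cost)."""
    best = None
    for i, fa in enumerate(clip_a):
        for j, fb in enumerate(clip_b):
            cost = frame_cost(fa, fb)
            if best is None or cost < best[2]:
                best = (i, j, cost)
    return best
```

Crucially this search runs offline, which is exactly why it suits Hitman’s budget: the expensive pairwise comparison happens once during authoring, leaving only graph traversal at runtime.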
This is where the AI comes in, since while the motion graph fixes the design part of the problem, you still need a system to execute the correct animations. Hence, starting in Absolution, a new animation controller was integrated that relies on reinforcement learning. Given the current animation, direction and speed, plus the desired direction and speed, the system figures out which animation it should play, or stay in, next. The original version relies on a simple variant of a standard greedy policy, but it’s suggested that Q-learning and basis function approximation have been explored and may have been adopted in later versions.
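For the curious, a single tabular Q-learning update plus the greedy policy it improves upon look like this. The framing for locomotion – a ‘state’ being something like the current clip, heading and speed, and an ‘action’ being the next clip to blend to, rewarded by how closely the result matches the desired heading and speed – is my own reconstruction of what’s described above, not IO’s published design.

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step over a dict q keyed by
    (state, action). Returns the updated value for convenience."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]


def greedy_action(q, state, actions):
    """The simple greedy policy: pick the action (next clip) with the
    highest learned value for this state."""
    return max(actions, key=lambda a: q.get((state, a), 0.0))
```

Because the state and action spaces here are discrete clips and quantised headings, the learned table can be baked offline and consulted cheaply at runtime, which fits the precomputed, low-CPU-overhead goal mentioned earlier.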
From Absolution through to Hitman 2016, the core AI systems were reinforced, the crowd system was optimised further and of course the bodyguard and VIP systems were integrated. This same process has continued into the 2018 sequel, Hitman 2.
The details are still a bit sketchy at the time of this video, but there are some notable changes that have been made as each system has been revised and iterated upon. The most obvious change is further optimisation of the crowd system, which can now handle over 2,000 NPCs, with the second mission at the racetrack in Miami showcasing this to full effect.
Secondly, the sensory systems have also received an overhaul: you can now shake off suspicious looks from NPCs by simply turning your face away from them, rather than having to break line of sight. Plus the player can now better hide within crowds, as well as use vegetation to their advantage. But conversely, NPCs can now see you in mirrors, which results in much more realistic behaviour.
Lastly, the third mission in Colombia ups the ante by having three targets, each of them with fairly large patrol routes they can move along. This has required even more flexibility, where the game’s systems interrupt the paths these targets are taking if it becomes apparent that the player is attempting to complete a particular opportunity, forcing them back so you’re not waiting 20 minutes to make your play.
All in all, Hitman is one of the richest and most complex games I’ve come across here on AI and Games, as a number of bespoke, highly tuned and optimised systems come into play to control a massive number of non-player characters, while also keeping the overall performance overheads as low as possible. It sets the standard for the sheer size and scale of reactive realtime AI characters, and hopefully there’s still more to come from IO Interactive’s franchise.
- “Creating the AI for the Living, Breathing World of Hitman Absolution” by Mika Vehkala, GDC Europe 2013.
- “Crowds in Hitman: Absolution” by Kasper Fauerby, Game/AI Conference 2012.
- “Reinforcement Learning based Character Locomotion in Hitman: Absolution” by Michael Büttner, Game/AI Conference 2012.
- “Bodyguards and VIPs: A look at ambient, alert and evacuation AI behaviour in Hitman” by Jason Schroder and Thomas Egeskov Petersen, nucl.AI Conference 2016.