People tell lies all of the time…

We’ve all told lies. From when you were a little kid trying to avoid punishment to the other day…when you were trying to avoid punishment. Lying is mostly about avoiding something harder: judgement, punishment, awkward conversations, or more serious confrontations. Lies have a good deal of utility. Manipulating what others know can gain you an advantage, or soften certain circumstances. Lying is so prevalent in human society that most religious, cultural, and political power structures have laws, rules, and norms dedicated to when and why it is appropriate or inappropriate to lie.

Obviously, if we’re working on a conversational AI, we’re going to want to look at this behavior!

Lying for fun and profit…

Lying is a deceptively difficult subject, however. First, you need to determine when and why a character may want to lie. The “when” is not too difficult: we can make lying an action and then use Utility Theory or some other way of gauging whether lying gets an NPC to their goal better than other actions, such as telling the truth. The more difficult part is the “why”. You can still approach this in a variety of ways, including scripting the behavior, but here’s a list of the generalized circumstances under which people lie:

  • Avoiding a confrontation: A character may lie about knowledge of something to simply let the person asking go on their way as a low-cost method of avoiding a more confrontational situation.
  • Avoiding a punishment: Denying something is a good choice if you don’t want to be punished for it. You can even tell a lie on behalf of someone else to help them avoid punishment.
  • Helping someone (or yourself): People lie on their resumes, or about people when introducing them to others. Spies lie about themselves all of the time. Nepotism often involves lying to make the installation of a favorite into an undeserved position go more smoothly.
  • Hurting someone: Pretty much the opposite of the above, people lie in order to prevent someone from getting a job, to hurt their social standing, to prevent them from achieving a goal.
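As a sketch of the “when” decision described above, here is one way the lie-versus-truth choice could be scored with simple utility weights. Everything in it (the candidate lines, effect values, and motive weights) is invented for illustration; it is not Interrogative’s actual API.

```python
# Hypothetical sketch: choosing whether to lie via simple utility scoring.
# Motives and their weights are illustrative, not from Interrogative.

def choose_response(candidates, weights):
    """Pick the candidate response with the highest weighted utility."""
    def utility(c):
        return sum(weights.get(motive, 0.0) * value
                   for motive, value in c["effects"].items())
    return max(candidates, key=utility)

candidates = [
    {"text": "I took the pie.",          "effects": {"punishment": -0.9, "trust": 0.5}},
    {"text": "I never saw any pie.",     "effects": {"punishment": 0.6,  "trust": -0.2}},
    {"text": "The cat knocked it over.", "effects": {"punishment": 0.4,  "trust": -0.4}},
]

# An NPC that fears punishment more than it values trust will pick a lie.
weights = {"punishment": 1.0, "trust": 0.3}
print(choose_response(candidates, weights)["text"])  # → "I never saw any pie."
```

The same scoring function handles truth and lies uniformly, which is the point: lying is just another action competing on utility.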

The reasons above can be scripted, or you can use whatever method you think best to decide which situations call for a lie. Lie to the player too often, and they won’t trust anyone. Lie too little, and it doesn’t make a difference.

There’s just one problem…

With Interrogative, the data is quite granular, often consisting of specific predicates such as color, weight, size, scale, etc. Most lies are simple, relating to only one or two of those predicates or attributes, but the problem is how to manipulate that data so that the result is a believable lie.

The discussion above talked about using methods like Utility Theory to figure out if a lie has more utility than telling the truth, but in reality, you’re going to want to check more than one lie. For example, if you’ve been caught by Sauron, and he wants to know who has the One Ring, it’s probably just as bad to say that Sam has it as to tell the truth (since Sam is always with Frodo). The good news is that only a few checks are needed to come up with a rational lie that points to someone other than Frodo or Sam, and lures the Dark Lord somewhere else.
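A minimal sketch of that vetting step. The “closeness” scores are invented here; they stand in for whatever measure your game uses for how strongly a candidate answer points back at the truth.

```python
# Hypothetical sketch: reject candidate lies that are nearly as revealing
# as the truth itself. Closeness values are invented for illustration.

TRUTH = "Frodo"
closeness_to_truth = {"Frodo": 1.0, "Sam": 0.9, "Aragorn": 0.4, "Saruman": 0.1}

def viable_lies(candidates, threshold=0.5):
    """Keep only lies that won't lead the listener back to the truth."""
    return [c for c in candidates
            if c != TRUTH and closeness_to_truth[c] < threshold]

# "Sam" is filtered out: naming him is as good as confessing.
print(viable_lies(["Frodo", "Sam", "Aragorn", "Saruman"]))  # → ['Aragorn', 'Saruman']
```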

So, once you can figure out when and why to lie, and find a manipulation of data that is most favorable for the action, you’re ready to actually manipulate the data. And that brings its own set of challenges- and advantages.

Semantic Gradients to the rescue!

A Semantic Gradient is basically a list of words or terms that acts much the same as a Color Gradient, in that it transitions from a beginning state to an end state. In the Interrogative AI framework, for example, the Semantic Gradient for the Dominance trait runs from its least dominant term, “Deferential”, up to its most dominant, “Oppressive”.

A Semantic Gradient does not need to have a minimum number of entries, or even contain text at all. The concept here is that we have a spectrum of values to work with, and this spectrum does not need to be linear. It should, however, be a consistent set of data, so don’t mix ints, floats, and strings.
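As a sketch of treating a gradient like a reusable data type, here is a minimal `SemanticGradient` class. The intermediate Dominance terms are invented for illustration; only “Deferential” and “Oppressive” are endpoints mentioned above, and this is not Interrogative’s actual implementation.

```python
# Illustrative SemanticGradient type: maps a normalized value in [0, 1]
# onto a discrete, ordered list of terms (and back again).

class SemanticGradient:
    def __init__(self, terms):
        if len(terms) < 2:
            raise ValueError("a gradient needs at least two entries")
        self.terms = list(terms)

    def term_for(self, value):
        """Return the term closest to a normalized value in [0, 1]."""
        value = min(max(value, 0.0), 1.0)
        return self.terms[round(value * (len(self.terms) - 1))]

    def value_for(self, term):
        """Return the normalized position of a term on the gradient."""
        return self.terms.index(term) / (len(self.terms) - 1)

# Intermediate terms below are made up; the endpoints come from the example above.
dominance = SemanticGradient(["Deferential", "Accommodating", "Assertive",
                              "Domineering", "Oppressive"])
print(dominance.term_for(0.95))  # → "Oppressive"
print(dominance.term_for(0.0))   # → "Deferential"
```

Because the gradient exposes both directions (value to term, term to value), a lie becomes simple arithmetic: take the truthful value, move it along the spectrum, and read off the new term.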

To make another example from the world of Lord of the Rings, Sauron had initially been defeated in a previous war, and had presented himself to his victors as being much different than his “Oppressive” self. He pretended to be “Deferential” in his actions, and concealed his true nature until he had fashioned the Rings of Power and brought the empire of Numenor crashing down. If there were a Sauron NPC reflecting this time period, and it was asked how dominant it was, it certainly would not pick the Oppressive term, which is accurate and truthful, but the Deferential term, which is what Sauron was masquerading as. Having these terms available in a format that conforms with the mathematics of the representation of the NPC’s personality is helpful here.

The real work is in having the predicates, attributes, adjectives, and other data points available to use in these situations. As a designer, you’ll need to know what your NPCs can talk about, and what they can lie about (if you’re restricting them to lying about certain things), and then build that data accordingly. And like the gradient above, it’s best to be able to reference your gradient with a particular predicate or attribute, and treat it like a data type so that it becomes reusable.

Colors are best represented as RGB, HSV, CMYK, etc., so you can create functions that manipulate those data structures directly. The same goes for numerical data, which is sometimes as easy as multiplying or dividing by some factor that represents how much of a lie or manipulation is being applied.
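For instance, a numeric or color lie can be a one-line transformation. This sketch uses Python’s standard `colorsys` module; the “lie factor” parameters are invented knobs, not anything from Interrogative.

```python
# Illustrative sketch: lying about a number by scaling it, and about a
# color by shifting its hue while keeping saturation and value intact.
import colorsys

def lie_about_number(true_value, factor):
    """Exaggerate (factor > 1) or understate (factor < 1) a numeric attribute."""
    return true_value * factor

def lie_about_color(rgb, hue_shift):
    """Shift a color's hue around the wheel; rgb components are in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + hue_shift) % 1.0, s, v)

print(lie_about_number(2.0, 1.5))              # the fish was 3 meters long, honest
print(lie_about_color((1.0, 0.0, 0.0), 0.33))  # the red cloak becomes roughly green
```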

The hardest data type to address here is text. I would recommend using dialog templates and filling in attribute values from Semantic Gradients, so that a lie simply reuses the template and manipulates the data instead of trying to write permutations on sentences; go down that road and you’ll find yourself drowning in work right away.
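A minimal sketch of the template idea, using ordinary Python string formatting rather than any particular dialog system. The template text and slot name are invented for illustration.

```python
# Illustrative sketch: one template serves both the truth and the lie;
# only the slotted-in gradient term changes.

TEMPLATE = "I would describe myself as {dominance}, really."

def render(template, truthful_term, lie_term=None):
    """Render the line with either the truthful term or a substituted lie."""
    return template.format(dominance=lie_term or truthful_term)

print(render(TEMPLATE, "Oppressive"))                  # the truth
print(render(TEMPLATE, "Oppressive", "Deferential"))   # Sauron's version
```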

The extra feature this affords us…

One of the benefits of being able to manipulate data like this is that you can also implement characters who exaggerate or understate things, or who simply have bad memories. Whether the distortions are placed randomly or managed by some simple logic, the old forgetful gnomish character suddenly becomes a lot more realistic.
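As a sketch, a bad memory can be modeled as random drift along a Semantic Gradient; the drift size is an invented tuning knob, and the size terms are made up for the example.

```python
# Illustrative sketch: an unreliable memory drifts a remembered term a
# step or two along its gradient instead of fabricating a deliberate lie.
import random

def fuzzy_recall(terms, true_term, max_drift=1):
    """Return a term near the true one, simulating a forgetful NPC."""
    index = terms.index(true_term)
    drift = random.randint(-max_drift, max_drift)
    return terms[min(max(index + drift, 0), len(terms) - 1)]

sizes = ["tiny", "small", "medium", "large", "huge"]
print(fuzzy_recall(sizes, "large"))  # "medium", "large", or "huge"
```

Raise `max_drift` for a truly scatterbrained character, or gate the drift behind logic (age, hypnotism, amnesia) to apply it selectively.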

This effect can also be applied selectively with logic to represent the effects of hypnotism, amnesia, etc. It’s just another tool in the toolkit for designers to bring their characters to life!


This is a high-level view of making NPCs lie, and the feature is very much a work in progress for Interrogative. The feature itself will likely not be ready in time for the version 3.0 release, but is being developed with an eye towards 3.5 or 4.0. I’m working on adding a feature road map to the website much like Unity has, so that the progress of features is updated as the versions progress.

Next blog: Changing data structures to accommodate new features, and thoughts about data in general…