BablBrain (http://www.bablbrain.com): Character for your characters

Parsing from text to graph using dictionaries, part 2
http://www.bablbrain.com/2017/06/13/parsing-graphs-part-2/
Tue, 13 Jun 2017

If my life depended on timely blogging, I’d be dead ten times over. Regardless, there’s stuff to talk about, and it involves parsing text into graphs. I updated the dictionary we talked about in the last blog, and I’m currently in the middle of another update. The short story is that using lookups in the dictionary for parsing got me a much faster parse time, and I found yet more flaws in how I was representing the words, leading to another format and moving the dictionary into a database instead of a flat file that is parsed at startup.

Where we left off with parsing to graphs…

Last time, I had a pretty neat parser that looped through words using a dictionary, both to tell what kind of word each one was and what kind of relationship it had with other words, according to the kind of datatype it represented. Aside from the usual datatypes such as timestamps, numbers, and colors, semantic gradients were also widely used. That type is for words that fall along a gradient, like the words between “good” and “evil”.

Then we used some simple pattern matching of vector rules against the word vectors generated by the sentences. That gave us things like this:

Parsing text to a graph.
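A minimal sketch of that pattern-matching step might look like the following. The dictionary entries, rule shape, and triple output here are illustrative assumptions, not Interrogative's actual data format:

```javascript
// Illustrative dictionary: each word maps to a word type.
const dictionary = {
  cat: { type: "noun" },
  chases: { type: "verb" },
  mouse: { type: "noun" },
  the: { type: "article" },
};

// Vector rules: a pattern of word types, and which positions become
// the (subject, relation, object) of a graph triple.
const rules = [
  { pattern: ["noun", "verb", "noun"], triple: [0, 1, 2] },
];

function parseToTriples(sentence) {
  // Look each word up, dropping articles and unknown words.
  const words = sentence
    .toLowerCase()
    .split(/\s+/)
    .filter((w) => dictionary[w] && dictionary[w].type !== "article");
  const types = words.map((w) => dictionary[w].type);
  const triples = [];
  for (const rule of rules) {
    const matches =
      types.length === rule.pattern.length &&
      rule.pattern.every((t, i) => types[i] === t);
    if (matches) {
      const [s, r, o] = rule.triple;
      triples.push({ subject: words[s], relation: words[r], object: words[o] });
    }
  }
  return triples;
}

// parseToTriples("The cat chases the mouse")
// -> [{ subject: "cat", relation: "chases", object: "mouse" }]
```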

 

That’s all well and good, but there were still problems with words that can be an adjective, a verb, or a noun depending on context, such as cross (“don’t cross me”/”a cross hung on the wall”). It was also relatively slow, since I used for() loops to iterate over basically everything. Not that for() loops are bad- but they can be slow when iterating over thousands of dictionary entries dozens of times just to find words.

Lookup, the parse times are falling!

One of the best ways to fix the slow parse times was to load the words into a lookup object keyed by the word itself. In Javascript, that’s basically just making an object. Parse times, as you can see in the screenshot below, fell quite a bit:
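A rough sketch of why this helps (the entry shapes are invented for illustration): the linear scan touches every entry until it finds a match, while the object lookup is a single property access.

```javascript
// Illustrative dictionary entries.
const entryList = [
  { word: "red", type: "color", value: "#FF0000" },
  { word: "smart", type: "gradient", vector: "relativeIntelligence", value: 0.75 },
];

// Slow path: a for() loop scans the whole list for every lookup.
function findByLoop(word) {
  for (let i = 0; i < entryList.length; i++) {
    if (entryList[i].word === word) return entryList[i];
  }
  return null;
}

// Fast path: build the lookup object once at load time...
const lookup = {};
for (const entry of entryList) {
  lookup[entry.word] = entry;
}

// ...then every lookup is a single keyed access.
function findByKey(word) {
  return lookup[word] || null;
}
```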

Parsing times outlined in red.

From the version 1 dictionary’s average parse time of 26ms, we fell to 3ms. That was really good! But I wasn’t done. Next, I structured the rules into arrays that got rid of a lot of duplicated code, and parse times fell under 1ms, which felt a lot better. Not only am I looking to parse text into graphs that show relationships between data, but I need to be performant about it as well. At about this time, I also made the tool a bit snazzier looking, and decided to parse the whole dictionary to a graph to visualize it:

It took a while to load…

Words are grouped by their root word

Closer

All that done, I began to look at the persistence of the information. At this point, I was loading a file in and parsing it, holding it in memory, and then writing it back to a file if I wanted to edit it in the tool. It worked, but the drawback was that there was no place to “remember” the parsed information between sessions. Also, editing a flat file is a pain in the ass. So it was time to begin working on version 3 of the dictionary.

The new, new dictionary

I fell back on MySQL, not only because that’s what I know, but also because a lot of the graph databases seem to be built around triples and such, and what I was representing in my data was a lot less neat than that. Interrogative already uses a hierarchical representation of its knowledge, and these graphs are eventually going to supersede that representation. So, I combined the techniques so that we had lookup tables for words and gradients, and the dictionary entries got stored in a table that could be maintained much more easily than a flat file.
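A hedged sketch of what loading those tables into memory might look like. The row and column names here are invented, not the actual schema; the idea is just that rows from the database get folded back into the same fast lookup objects:

```javascript
// Hypothetical: fold rows fetched from the MySQL-backed dictionary (via a
// data access layer) into in-memory lookups for words and gradients.
function buildLookups(rows) {
  const words = {};
  const gradients = {};
  for (const row of rows) {
    // Word lookup: one entry per word, keyed for O(1) access.
    words[row.word] = { type: row.type, vector: row.vector, value: row.value };
    // Gradient lookup: group words by the vector they belong to.
    if (row.vector) {
      (gradients[row.vector] = gradients[row.vector] || []).push(row.word);
    }
  }
  return { words, gradients };
}

// Example rows, shaped as a data access layer might return them:
const rows = [
  { word: "stupid", type: "adjective", vector: "relativeIntelligence", value: 0.2 },
  { word: "brilliant", type: "adjective", vector: "relativeIntelligence", value: 0.9 },
];
const { words, gradients } = buildLookups(rows);
// words.brilliant.value -> 0.9
// gradients.relativeIntelligence -> ["stupid", "brilliant"]
```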

That’s a low bar to meet right there. With these tables in place, I wrote a data access layer based on previous work in that patiently-waiting demo game, and as of this writing, I’m chugging along on getting that integrated to see what the parse times look like.

Where to?

With the new dictionary version being set up, I’m looking to see what the performance looks like, though I imagine it won’t be too much slower. I’m still loading the dictionary into memory, so it’s still using lookups, and will still be fast. But that’s not the main concern anymore.

My main concern is to begin implementing more advanced usage. Now that we can persist parsed text into knowledge graphs, we can:

  • Update those graphs by parsing more text.
  • Update those graphs by bolting on a conversational interface a la Interrogative (this is something of a must).
    • Use the above conversational interface to query the graph like a chat bot.
    • Overlay the Personality-based-AI discussed here to give that conversation a bit more personality.
  • Parse multiple texts in parallel into multiple knowledge graphs pertaining to the same object/subject.
    • Compare, contrast, and merge the knowledge graphs.
  • Begin addressing information that changes over time (movement, state changes, time, etc).
  • Begin working on reasoners to iterate over knowledge that represents actions.
  • If I can find time, I’d love to dabble with some machine learning pumping its output into these knowledge graphs. Much of ML/DL these days seems to be moving in the direction of related things such as memory, attention, etc. A knowledge graph is really just top-level knowledge.

Until next time!

I wish I had more time to work on this! I’m not treading too much new ground here- this being a mish-mash of semantic web, ontologies, and game AI techniques. All of that, just to suit my needs for NPCs that have a coherent world model in one place. It may not end up ticking every checkbox (it won’t), but it should yield some good techniques for more advanced NPCs and games. Especially where narrative needs a good knowledge representation system.

Next time: Another update on the dictionary and tool, and hopefully some movement on getting it integrated into the demo game NPCs.

Using a dictionary to build graphs from text
http://www.bablbrain.com/2017/03/23/using-dictionary-build-graphs-text/
Thu, 23 Mar 2017

Like a drunk who wakes up in the alleyway after a bender, I’m sitting here typing about some recent months of progress in natural language processing, especially where it concerns using a dictionary to help parse the language. It’s all still related to the Interrogative AI I’ve been working on, of course, and for conversational AI in general.

Basically, the problem I was facing was how to interpret what a player says to a character. At first, I resorted to simple actions that the NPC would filter through their personality traits to respond to, and that was okay. Then, I went a bit further, since I was implementing a text interface for a demo, and included a list of words and simple sentiment markup (-1 for negative, 0 for neutral, 1 for positive) that I’d find in statements.
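That sentiment list can be sketched as something like the following; the actual word list and scores are illustrative assumptions:

```javascript
// Simple sentiment markup: -1 negative, 0 neutral, 1 positive.
const sentiment = { idiot: -1, fool: -1, okay: 0, brilliant: 1, friend: 1 };

// Sum the scores of any known words found in a statement.
function scoreStatement(text) {
  return text
    .toLowerCase()
    .split(/\W+/)
    .reduce((sum, w) => sum + (sentiment[w] || 0), 0);
}

// scoreStatement("You are a brilliant friend") -> 2
// scoreStatement("What an idiot") -> -1
```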

That was less okay, as I quickly found that unrestricted text input makes for some statements that are open to interpretation. Then, I decided to look back at how I’m using attributes in the AI for inspiration. Maybe expanding the attributes to be a lot more flexible would pay off better than I originally thought. Read on, my friends…

What’s a word?

One of the guiding principles of Interrogative was that words were actions- very complex actions, but actions nonetheless. I call the AI system “Interrogative” because I started out using interrogative words as actions against information as a game mechanic, and as a method to get NPCs to answer questions and address the player and the world.

But when information is posed instead of queried, it’s a bit different. The NPC has no database of attributes and values to look up and respond with. The system now needs to parse the text input of the player, figure out what’s being said, and then feed that to the NPC in a way that it can then respond. Because the system is based on dynamic information and a designer-specified domain, I’m avoiding canned statements. They can be done- but we’re not here to talk about easy things. So now we need to parse: Enter Natural Language Processing and Understanding.

Defining it

Without going into specifics, the majority of the NLP algorithms deal with statistics, and extracting high-level information such as the subject of a sentence or sentiment analysis. What Interrogative needs is something a bit more robust: Models. A model is a set of data that describes an object in the world. The object can be a physical object or a concept, but the data operates the same way.

The player types something like “NPC is an idiot” and that builds out a simple model of data stating that the NPC has very low intelligence. The NPC can then compare that to its model of self. And if the presented model is negative, positive, or some other measurement different from what it sees of itself, it can act accordingly. If it’s an Orc, it will probably whip out its axe and show you how smart it is at hacking you to death, you condescending jerk!
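As a hedged sketch of that comparison, the attribute name, values, and reaction thresholds below are all assumptions for illustration:

```javascript
// The NPC's model of self: it sees itself as average intelligence.
const selfModel = { relativeIntelligence: 0.5 };

// "NPC is an idiot" might parse down to a presented model like this:
const presented = { relativeIntelligence: 0.1 };

function reactTo(presentedModel, self) {
  const delta = presentedModel.relativeIntelligence - self.relativeIntelligence;
  if (delta < -0.25) return "insulted"; // cue the Orc and its axe
  if (delta > 0.25) return "flattered";
  return "neutral";
}

// reactTo(presented, selfModel) -> "insulted"
```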

So that’s the process, but now the issue at hand is: How does NLP tell you what “idiot” means? Simple answer: It doesn’t. You can maintain a list of words for sentiment analysis, which will solve that problem, but then you’ll run into another problem: What if the player says “NPC is a fool”? That’s not the same as “idiot”, though it is also usually negative in context. Beyond sentiment, which changes in context, what the hell do these words mean?

One way to answer this, and the way that I’m using for my techniques, is to use the same method I used when I put together the personality traits for my NPCs: Spectrums of measurement along which the words are placed, describing a relative amount. “Smart”, “brilliant”, “stupid”, “unintelligent”- these all belong along a vector between two words that describe the extremes of relativeIntelligence. “Bold”, “adventurous”, “timid”, all can be placed along another vector with words describing the extremes of relativeConfidence. You can evenly space those words (implicitly, if they’re assigned no values), or you can assign values to them. Either way, now I had vectors to fill with words.
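The implicit, evenly-spaced case can be sketched like this; the vector name comes from the post, but the word ordering and the [0, 1] range are illustrative assumptions:

```javascript
// Words placed along a gradient, ordered from one extreme to the other.
const relativeIntelligence = ["stupid", "unintelligent", "smart", "brilliant"];

// With no explicit values assigned, space the words evenly over [0, 1].
function implicitValue(vector, word) {
  const i = vector.indexOf(word);
  if (i < 0) return null; // word not on this gradient
  return vector.length > 1 ? i / (vector.length - 1) : 0.5;
}

// implicitValue(relativeIntelligence, "stupid")    -> 0
// implicitValue(relativeIntelligence, "brilliant") -> 1
```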

Creating the data

So, with the idea that I could parse the text and “know” what a word was, I set out to do something that was…fucking tedious. I created a dictionary of words mapped to arbitrary vectors. As of this writing, there are 4,581 entries in this dictionary, and growing. I divided the words into several lists of the obvious categories, not all of which are mapped to vectors: Adjectives, adverbs, verbs, nouns, pronouns, prepositions, etc. Words that describe- adjectives- were the first to get mapped, and the easiest!

For many of those, especially where it dealt with concrete mappings from words to data such as colors and number words, I gave values. In doing that, a program now definitively knows that “red” is #FF0000 or RGB(255,0,0) (whatever format you choose). If a player types “<car> is red”, then the NPC can understand what the player is saying, and be able to know (if it actually knows the color of the <car>) if that statement is correct or not. Helper functions will also allow the NPC to change that data, for a variety of reasons.
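For the concrete case, a sketch of that check might look like this. The object and knowledge shapes are assumptions; the point is that “red” resolves to a definite value the NPC can compare against what it knows:

```javascript
// Concrete mappings from color words to values.
const colorWords = { red: "#FF0000", green: "#00FF00", blue: "#0000FF" };

// What the NPC happens to know about the world.
const npcKnowledge = { car: { color: "#FF0000" } };

// "<car> is red" -> is that statement correct, as far as the NPC knows?
function statementIsCorrect(objectName, colorWord) {
  const known = npcKnowledge[objectName];
  const claimed = colorWords[colorWord];
  if (!known || !claimed) return null; // NPC can't judge what it doesn't know
  return known.color === claimed;
}

// statementIsCorrect("car", "red")  -> true
// statementIsCorrect("car", "blue") -> false
```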

For the adjectives that are more subjective, placing them on a vector with a value allows the NPC to “know” that “stupid” and “moron” are not quite the same. If they view themselves as average intelligence, then they’re being told they’re less than, and then the logic required to react to that is that much easier to implement.

I am now trapped in dictionary hell…

Of course, this all sounds great! But it took days and days of going to dictionary.com and looking up words, trying to figure out which vector was best for an adjective, and then typing all of that in. I eventually wrote a tool to help with it, but it was still maddening. At this point, I’m not aware of any resources that provide this kind of information. I asked on Twitter, but no one responded- which means either no one knew, or everyone thought I was stupid for even asking.

Semantic Web ontologies such as RDF and OWL do provide some information, especially where it relates to relationships or categories of certain concepts, but I wasn’t able to find quite the dictionary I was looking for. There’s also Gellish, which is a formalized English- but that costs money and has licensing attached, so I can’t say for certain whether it provides this kind of information (if I understand it correctly, it provides something similar to, but more detailed than, the Semantic Web ontologies).

Action words act on vectors

I ran into some hiccups trying to figure out what to do with verbs: They’re not descriptive words, except in that they describe an action, which is not in itself something atomic. But verbs aren’t always a high-level concept, either. They are actions that operate on the present value of those vectors whose positions are represented by adjectives and other words. “Amuse” adjusts the calmExcited vector in the positive direction. “Alert” adjusts the relativeVigilance vector. And so on.
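A sketch of verbs as operations on vector values. The vector names come from the post; the delta sizes, the 0.5 default, and the clamping to [0, 1] are assumptions for illustration:

```javascript
// Verbs mapped to the vector they push on, and in which direction.
const verbs = {
  amuse: { vector: "calmExcited", delta: +0.2 },
  alert: { vector: "relativeVigilance", delta: +0.3 },
  bore: { vector: "calmExcited", delta: -0.2 }, // invented counterexample
};

// Apply a verb to the NPC's current vector state, clamped to [0, 1].
function applyVerb(state, verb) {
  const op = verbs[verb];
  if (!op) return state; // unknown verb: no effect
  const next = { ...state };
  const current = next[op.vector] !== undefined ? next[op.vector] : 0.5;
  next[op.vector] = Math.min(1, Math.max(0, current + op.delta));
  return next;
}

// applyVerb({ calmExcited: 0.5 }, "amuse") -> { calmExcited: 0.7 }
```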

This is a limited representation of a verb, of course, as verbs such as “walk” are physical actions that do not fit neatly into adjusting a value along a vector, but more or less adjust a more complex state machine or graph representation of the data involved.

It’s all a graph

And that last part about graphs is where things started to come back around to the model-building portion of the text parsing: As humans learn, they build out extremely complex graphs using neurons and synapses inside the brain, and then iterate over that graph via the spiking behaviors of neurons, and the limited calculations further provided by the branching synapses.

At a high level, neurons, or groups of neurons, map to symbols such as words or concepts, which are closely related to each other on the physical surface of the brain. We don’t need to simulate all of that, but we can build a graph of knowledge that can be queried. It can also be iterated over, if you’re using some of those words to describe states that represent values for the NPC. The brain even creates graph-based maps of its own body as well as external areas (something that is starting to be experimented with in AI now). Graphs are useful!

I’ve begun representing the text as a graph, though in its current form, it does present some serious challenges where knowledge representation is concerned. Some of that is really just in how I’m parsing or presenting the data. Some of it will entail some new ideas on how to relate the information (such as how to represent episodic or sequential information). But the early progress, especially since I’m doing this with an hour here and there in my free time, is promising.

Text parsed into graph representation using a dictionary.

The relation labels are usually the vector name, and where a word is novel or does not have a defined vector, “attribute” is used.
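A sketch of emitting those labeled edges, with the vector name as the relation label and “attribute” as the fallback for words without a defined vector. The dictionary entries and edge shape are illustrative:

```javascript
// Illustrative entries mapping words to their vectors.
const vectorDict = {
  brilliant: { vector: "relativeIntelligence" },
  bold: { vector: "relativeConfidence" },
};

// Add one labeled edge per parsed word, subject -> word.
function addEdges(graph, subject, words) {
  for (const word of words) {
    const entry = vectorDict[word];
    graph.push({
      from: subject,
      to: word,
      label: entry && entry.vector ? entry.vector : "attribute",
    });
  }
  return graph;
}

const g = addEdges([], "npc", ["brilliant", "frobnous"]);
// g[0].label -> "relativeIntelligence"
// g[1].label -> "attribute" (novel word)
```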

This is just the beginning…

There’s a lot to be done with this dictionary/parsing technique that I intend to tackle- not just for games, but in general:

  • Rewrite the dictionary to allow for multiple “definitions” (vectors, values, etc).
  • Account for words that are verbs, nouns, or adjectives, depending on the context (partially addressed by the dictionary rewrite).
  • Use parsing to figure out which vector is best for the context of the word.
  • Persist the graphs from parsed text and then query and compare it through the Interrogative chat interface.
  • How to parse text to define new words?
  • Techniques for building out models and iterating over the states like an FSM or Behavior Tree.
  • Predictive iterations based on available information in the model (that tiger has claws, and claws hurt- it can hurt me).

From there, it’s a pretty robust AI that seems way too heavy for use in games- especially multiplayer. However, the parsing of input text can be done client-side, as what matters is that the information that the player is typing is accurately conveyed to the NPC for processing.

It’s also a pretty robust AI for building a “chatbot” on- more on the level of Viv than a lot of the shitty app chatbots that are really just text fields and menus blurted out in chat format to give the appearance of a conversation for the sake of a trend.

 

Next Blog: More about the dictionary, rewrite progress (I’ve hardly started 🙁 ), and maybe some commentary on other AI stuff. I have opinions!

Demos, and more AI work…
http://www.bablbrain.com/2016/11/07/demos-ai-work/
Mon, 07 Nov 2016

Time flies when you’re working on stuff…

The good news is that I have a demo coming for Interrogative AI, so that you can play with it. The bad news is that it’s not ready today. The good news is that the bad news is because it’s a multiplayer game with RPG capabilities. Because a static demo was just way too easy for me! The bad news? It’s still not ready today 🙁

Have a screenshot:

AI Works of late…
Roguelike-style demo for Interrogative.

It’s nothing too special (for now), but this will serve as the world that my Interrogative AI will inhabit. Richer in data than visual content, this demo will be a proving ground for a lot of the AI that I’m working on. And speaking of which…

More language AI…

A lot of the research I’ve been doing into AI lately has been trying to look at more generalized AI- though not the “strong-” or “general-” AI that most people might be familiar with. Instead, I’m looking into a more adaptive AI that seems like strong or general AI, but is really more like a flexible system that allows for extensions of its domain knowledge, using Semantic Data.

To that end, I’m looking at doing a few experiments with data using some semantic techniques, in the hopes of developing some lightweight algorithms that can be leveraged in game AI to give the NPCs a lot more fluidity in how they interact with players and the game world. A few of those experiments involve creating a formalized language dictionary (English, for now) that will allow the AI to use language in a more flexible manner than simply being given Dialog Templates or standard statements. We all know how to use words, and we have dictionaries that tell us how to use even more words. AI shouldn’t be stuck out in the cold when it comes to understanding words.

World modeling…

Another issue that I want to tackle in AI is that of how the AI interacts with the world. One thing that humans (and animals) do is create an internal model of the world around them. This model is constantly updated as we move about, doing things and gathering feedback. Internally, we know that if we knock a glass off of the kitchen table onto a tile floor, it’s probably going to shatter. We know that if we do the same, but the floor is shag carpet, then it might not shatter. Our brain doesn’t need to model physics in any complicated way to know that. It simply knows that the glass will fall, and come apart on impact, because the tile floor is hard. Within our minds, that’s a general rule: Glass stuff hitting (or hit by) hard things will break.

Our brains store those rules as facts. Some of those rules and facts are shoddy because they’re based on our limited senses of observation. We make do with what we’ve got, and sometimes it’s a bit off.

So easy a baby can do it…

It’s simple, right? Too simple. I remember watching my daughter teaching herself how to sit in these kid-sized folding chairs we bought our kids. My daughter didn’t need 10 million sets of data to learn how to sit on the chair. She adjusted on the fly, knew what the end result should be, and learned that the folding chair was not the same as sitting on a box, couch, or bed. After a second try, my daughter had mastered the chair. After that, she knew that those chairs are different, and her mind built rules on the fly for how to treat them. The rules she formed are based on knowledge gained at first by classification similar to Machine Learning, and encoded as Semantic Data in her brain.

Machine Learning is currently pretty bad at anything much more complicated than classification. In my opinion, ML is not even AI, because it doesn’t even try to avail itself of any understanding of the world it inhabits. It builds no world model, and if the data it’s been fed changes suddenly, then it needs tons of new data to readjust. For the domains it’s trained on, ML works really well. I think that’s great. But it’s just way too narrow in scope. It’s a great tool. But it’s just a tool.

Conclusion…

Semantic AI is where it’s at.

You might think that this is something of a departure for me, from game AI to techniques that do more complex things. But in my mind, these techniques should naturally be lightweight compared to what we’re doing now. Games benefit from having faster, more realistic AI.

Opinion: Of course our brain processes information…
http://www.bablbrain.com/2016/05/25/opinion-brain-information/
Wed, 25 May 2016

So, I read this interesting article by Robert Epstein titled “The Empty Brain”, in which he asserts that the brain does not process information, and that it is not a computer. He also asserts that the brain does not store memories. And other stuff. Anyway, I wanted to provide some counter-arguments for this article, because I think it ignores some pretty important things that we know about the brain…

First things first: Of course it processes information…

I don’t mean to be too dismissive about the ideas he presents, because I do think that Epstein has a valid point in that our brains don’t work in the same way as the computers we’ve built. A computer processes information in binary 1s and 0s, while our brain is basically a massively connected series of cellular salt-water batteries (neurons) that discharge their energy when stimulated by the discharges of other neurons or the nerves that connect to them. They’re both electrical, but there aren’t too many similarities past that. Neurons grow and strengthen their synapses, connecting to thousands of other neurons apiece, resulting in trillions of connections. Silicon chips have a set number of connections, unable to grow or change on the fly when given new or different information.

However, neurons do indeed process information. Take the eye, for example: How do you know what you’re looking at? Light hits the eye, and rods and cones transmit the stimuli they receive to the Optic Nerve, which in turn sends the impulses to several parts of the brain. Layers of neurons then reconstruct the visual information to decipher color, shapes, and context, and then associate those with other information, such as the fact that seeing the letters “c”, “a”, and “t” in a row usually results in the recall of information not contained in those letters, such as facts about felines (four legs, furry, evil, uses litter boxes, evil, makes a sound that sounds like “meow”, and they’re evil).

The fact that the drawings that form those letters prompt the recall of information points directly to the light entering your eyes being processed into a format that the brain understands and works with, associated with information known about those letters, associated with information known about the word they form, and then prompting the immediate recall of related known information. All of that, but also with information about what your body is doing at the time, so that your own movement and position are processed as part of that context (that whole video is worth a good watching, by the way). That is processing. Not binary 1s and 0s or mechanically accessing a hard drive, but the wetware style of processing that many animals with neurons possess.

Wetware means never having to access a hard drive…

I kind of jumped ahead in the previous section when I talked about the fact that looking at a series of letters can prompt an immediate return of related information. But I didn’t really jump ahead, because the way the brain works means that processing and retrieving information go hand-in-hand. The firing of neurons in the brain usually involves several areas of the brain, and of those areas, the motor cortex and sensory regions are often involved. This is true even when you’re not actively moving or smelling or seeing or hearing. Thinking of the color red will activate the visual cortex. Thinking of movement will fire neurons in the motor cortex.

In fact, only the brain stem is really immune from being accessed by memories due to the fact that it runs automatic processes in the body, such as breathing and heartbeat- though even there, there are techniques one can use to access those and other faculties. The brain, in processing information, immediately tries to tie it to existing information and that act is an immediate “retrieval” of information already possessed by the brain through previous stimuli.

Stating that the brain stores no memories because the brain does not work literally like a hard drive is just plain wrong. Recent research is beginning to discover which neurons are the ones that “hold” them, and even how to manipulate memories or encode images into the format that the brain uses to see. And while none of these things work the same way as in the computer hardware we’ve developed, it does not mean that the concept is not still true.

And why are we up in arms about these computer terms anyway?

This brings me to my next point: Who cares if the best terms we have to describe the processes of the brain are currently analogous to computers? I mean, if you would like to specifically name the processes that the brain uses in its internal workings so that they’re not confused with the processes of a computer, fine. But stating that the brain does not process or possess memories because you don’t like the terms or analogies used is throwing the baby out with the bathwater. Instead, it is enough to state that the brain does not process or store information like the computers we have built. To that, even non-experts reply “duh”.

I get that clicks are needed, and there are many theories about the brain, probably the majority of which have certain degrees of correctness in them (or will, once we’ve figured things out, which will take a long time). But stating that the brain just somehow experiences things, without calling the processes by the best terms we currently have available, is turning a blind eye to the knowledge we’re currently gaining about the workings of the brain, because you don’t like the words used.

To sum up my opinion on the article: Not to say that Epstein doesn’t know his field, but he seems to be lagging a bit behind some of the things researchers are finding, or maybe he just doesn’t think those discoveries are important or evident enough. Dismissing information because you think there’s a better term for it is not the best way to gather information.

Evolving the Dialog Template
http://www.bablbrain.com/2016/04/06/evolving-dialog-template/
Wed, 06 Apr 2016

So, I’m taking a hammer to the current iteration of Dialog Templates for Interrogative…

The current iteration turned out to be a good starting point for a much more flexible and resilient system. It’s slower than I’d like, with less flexibility and too many functions needed to make it work. It leans on too many things, and is not properly encapsulated. What Interrogative needs, in order to match the flexibility and power of its back-end querying and knowledge representation, is a Dialog Template system that is equally flexible, resilient to missing data, able to be localized easier, and much more simplified.

It’s one of the most important systems!

The Dialog Templates are the portion of Interrogative that the user will be looking at. Regardless of whether you choose to use NLP to decompose the users’ text, drop-down menus, or buttons, it’s the text of the conversation that matters most. It’s the end-result of figuring out what the player wants, sending it to the server, having the server create a response, sending the data back, and then formatting that data in a way that looks like dialog.

The internals of how all that happens are lost on the user, and should be, since the user is more concerned about their character’s story. But the internals have been bogging down as of late.

During GDC, I sat on the second floor of the West Hall and started ripping things out of the editor that had been made obsolete by some minor improvements. Things like string fields for knowledge categories where an int field accomplished the same thing more efficiently, and moving some GUI items around. However, as I played with the conversations, I kept running into issues with missing data, and the failure of the current Dialog Templates to handle it as well as it should.

Bad Dialog Template!

The biggest issue was in describing things, and how that query is due for an overhaul. The core issue was legacy code that relied on categories carried over from the Epic Frontiers days, kept around with the inertia of not being broken. Then it broke! Attributes allow for hierarchical categories, and that means that describing things no longer stays as neat as it once was. Instead of having a handful of purpose-made description functions that pulled data for describing a tool, vehicle, place, or living entity, the ability to define more broadly what an object is means that the description functions need to be similarly broad. And that presents a few problems.

For an object of an arbitrary category, you need to pull the known data, use it to describe the object, dump said data into a ResponsePayload object, and then fire that chunk of knowledge off to the client, which takes it, stuffs it into a Dialog Template, assembles it into a string, and prints it out onto the screen of the waiting user. Easy, right?
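
That pipeline can be sketched in a few lines. This is a hypothetical illustration, not Interrogative's actual API: `ResponsePayload`, `describe_object`, and the data shapes are all assumptions.

```python
# Hypothetical sketch of the server-side description flow. Names like
# ResponsePayload and describe_object are illustrative, not the real API.

class ResponsePayload:
    def __init__(self, object_id, attributes):
        self.object_id = object_id
        self.attributes = attributes  # list of (attribute_name, value) pairs

def describe_object(object_id, knowledge_base, npc_known_ids):
    """Pull only the attributes this NPC actually knows about the object."""
    known = [
        (name, value)
        for (name, value) in knowledge_base.get(object_id, [])
        if name in npc_known_ids
    ]
    return ResponsePayload(object_id, known)

# Example: an NPC that knows an item's price but not its weight.
kb = {"thingamabob": [("Price", 8), ("Weight", 2.5)]}
payload = describe_object("thingamabob", kb, npc_known_ids={"Price"})
print(payload.attributes)  # [('Price', 8)]
```

The client side would then take that payload and feed it into a Dialog Template, as described below.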

…what he said…

There’s a few issues to address, and some are sort of arbitrary…

  • Given a description of an object whose attributes you don't know (because it could be any object, whatsoever), in what order do you list said attributes?
  • How many attributes do you include in the description? Five? Ten? A hundred? Write a book?
  • For custom Attributes being pulled, how do you reference them?
Solving the custom Attribute issue- with more data

Addressing these in reverse order, the fact that Interrogative allows you to add custom Attributes means that the Dialog Templates need to know what kind of text to put around their values, or else you get text like “that thingamabob is 8”. “8” what, exactly? And it’s here that small details make dialog fall to pieces, even as the system does everything else correctly. Attributes need to be able to tell the Dialog Template how to reference them, so that you get proper text like “that thingamabob is 8 credits”. Small change, huge difference in context- especially when you have a conversation where you are talking about multiple subjects. Data in context is important. “2 by 3” is less descriptive than “2 by 3 inches” or “2 inches by 3 inches”.
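
As a sketch of the idea, each Attribute could carry its own reference text. The format strings and attribute names below are assumptions for illustration, not Interrogative's real data:

```python
# Minimal sketch: each Attribute carries a format string so "8" becomes
# "8 credits". The entries here are invented examples.

ATTRIBUTE_FORMATS = {
    "Price": "{value} credits",
    "Size": "{w} by {h} inches",
}

def render_attribute(name, value):
    # Unknown attributes fall back to the bare value, reproducing the
    # "that thingamabob is 8" problem on purpose.
    template = ATTRIBUTE_FORMATS.get(name, "{value}")
    if isinstance(value, dict):
        return template.format(**value)
    return template.format(value=value)

print(render_attribute("Price", 8))                # "8 credits"
print(render_attribute("Size", {"w": 2, "h": 3}))  # "2 by 3 inches"
```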

Attribute overload!

Now that we can get any Attribute and have its context pulled into the Dialog Template correctly, just how many Attributes do we want to do that with? If you ask Scotty to describe the Enterprise, then how much should he tell you? He knows virtually every nook and cranny of that ship. He can describe his quarters on the ship with more accuracy than other locations- should that be included? No, that’s insane. So at what level of detail do you describe an object?

So, there’s more than one way to skin this cat…

One way is that we simply limit the number of Attributes allowed into the description. Set a number arbitrarily as the default, allow it to be customized, and leave it at that. I’m lazy and can’t be bothered!

Or, slightly more complex is to have a preferred set of Attributes to use first, and then include a set number of additional Attributes. I’m less lazy, and have spent more time on this problem. Fist-bump!

Or, maybe let data do the talking… Look, custom Attributes got me into this mess, and custom Attributes will get me out. Creating another Attribute that states what Attributes to look up for a description query is possibly the best route. It allows the developers to quickly say “pick this, this, this, this, and this to describe this category of object”, and if you use that list to order the Attributes, then you kill two birds with one stone. Unless the developers never bother with that- NOW WHAT?? Well, in that case, see the above ideas. I think a stock list of description-friendly Attributes can serve as a decent fallback mechanism if the custom description Attribute is not present.
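
The fallback chain might look something like this sketch, where `DescribeWith` is a hypothetical custom Attribute holding the developer-supplied list, and the stock list and cap are arbitrary assumptions:

```python
# Sketch of the fallback chain: use a custom "DescribeWith" Attribute if
# the developers supplied one, else a stock list of description-friendly
# Attributes, capped at an arbitrary default. All names are hypothetical.

STOCK_DESCRIPTION_ATTRS = ["Name", "Category", "Size", "Color", "Price"]

def pick_description_attrs(obj_attrs, max_attrs=5):
    custom_order = obj_attrs.get("DescribeWith")  # e.g. ["Color", "Price"]
    order = custom_order if custom_order else STOCK_DESCRIPTION_ATTRS
    # Keep the given order, skip anything this object doesn't actually have.
    present = [a for a in order if a in obj_attrs]
    return present[:max_attrs]

obj = {"Name": "thingamabob", "Price": 8, "Color": "red"}
print(pick_description_attrs(obj))  # ['Name', 'Color', 'Price']
```

Ordering falls out for free: whichever list wins also decides the order the Attributes appear in the description.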

And what if none of the stock Attributes are present, and nothing else gets returned? Error Dialog Templates are good things to have, and sometimes, just let your NPC exit that predicament gracefully.

I’ll get right back to you on that, k?

Meanwhile, client-side…

This is a lot easier to deal with. Once you know your data, and have your constraints in place, you can make a Dialog Template object that can tolerate the vagueness of the data that server responses subject it to. A Template Manager receives the ResponsePayload, picks the correct template, and then gives it the data it needs to assemble the text and present it to the user.

A lot of this data is going to be numeric, and so the client-side will have a handy little lookup table to know how to translate the numbers into text, according to the Attributes that the data corresponds to. ColorHSL? There’s a function for that. A Float[3] for size? There’s a function for that. What was game-usable data on the server now translates to human-readable text on the client, without the client having too much access to the server, and without the server needing to do needless string and parse operations that it can safely farm out to the client.
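
A minimal sketch of that lookup table, assuming a `ColorHSL` triple and a three-float size as mentioned above (the formatting choices themselves are my assumptions, not the client's real functions):

```python
# Sketch of the client-side lookup idea: per-attribute-type functions that
# turn game-usable numbers into human-readable text. ColorHSL and the
# three-float size come from the post; the formatting itself is invented.

def describe_hsl(hsl):
    h, s, l = hsl
    # Crude 60-degree hue buckets, just to show the shape of the translation.
    names = ["red", "yellow", "green", "cyan", "blue", "magenta"]
    return names[int(h % 360) // 60]

def describe_size(dims):
    return " by ".join(f"{d:g}" for d in dims) + " inches"

FORMATTERS = {"ColorHSL": describe_hsl, "Size": describe_size}

def to_text(attr_type, value):
    fn = FORMATTERS.get(attr_type, str)  # unknown types fall back to str()
    return fn(value)

print(to_text("ColorHSL", (240, 0.5, 0.5)))  # "blue"
print(to_text("Size", (2.0, 3.0, 1.5)))      # "2 by 3 by 1.5 inches"
```

The server only ever ships numbers; the dictionary of formatters is the client's whole vocabulary for turning them into prose.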

Easy, right?

…sure…

That’s all, folks!

I’m pretty sure the Dialog Template saga is not over. It’s going to continue to evolve, as will the rest of the architecture. The goal is to make the system as flexible as possible for what’s to come (queue thrilling music).

Next Time: Updated screenshots of the editor, screenshots of the new Dialog Templates in action, and even documentation ahead of the release!

 

The post Evolving the Dialog Template appeared first on BablBrain.

]]>
Opinion: The good and bad of AI in the near future… http://www.bablbrain.com/2016/03/24/opinion-good-bad-ai-near-future/ Fri, 25 Mar 2016 03:00:08 +0000 http://www.bablbrain.com/?p=278 AI has a lot in store for us in the next couple of decades… Not all of it will be good (that’s our fault, not AI’s)- but a lot of it will, and even more of it depends on us and how we use it, both as individuals, and as as corporations, countries, and allies […]

The post Opinion: The good and bad of AI in the near future… appeared first on BablBrain.

]]>
AI has a lot in store for us in the next couple of decades…

Not all of it will be good (that’s our fault, not AI’s)- but a lot of it will, and even more of it depends on us and how we use it, both as individuals, and as corporations, countries, and allies or enemies. The transition, however, may in fact be brutal…which is why we need to do AI responsibly.

Click on to find out more…

It keeps popping up in conversations and my newsfeeds…

Even the non-technical people I know have heard something or other about AI- and how can they not? Boston Dynamics comes out with amazing videos detailing the state of their robotics, while Google and Facebook spout off about their AI achievements in Go and being able to tell me where Frodo took the One Ring in Lord of the Rings, respectively. Tesla makes progress on all-electric cars, while Google works on making them drive themselves (occasionally into buses, heh). 3D printers, easily corrupted chatbots, hoverboards, etc… It’s the future!

And people are starting to get a bit spooked…

Before I dive into the really spooky bits, a disclaimer: AlphaGo, Watson, and DeepMind are nowhere near SkyNet. Neither are Google self-driving cars, or even the military’s own drones (easily hackable, as Iran proved when they spoofed GPS on one of our RQ-170’s and made it land in their territory, claiming it for a prize so that they, too, could build R/C airplanes). I don’t even think Go players should be saddened by this news. We still play chess, and Garry Kasparov was beaten by Deep Blue in 1997. Microsoft’s Tay chatbot couldn’t withstand internet trolls for even 24 hours before they pulled the plug (I might have to write about this in its own blog)!

But more importantly, what all of these AI have in common is that they were purpose-built. None of these are capable of “General Intelligence”, which is what is required to even begin approaching our own, much less advancing past it. A Google self-driving car simply cannot decide to murder someone- that kind of knowledge lies outside its comprehension the same way we cannot see time. However, that same car will need to swerve in an accident some day, and will need to decide which shitty decision to make from a list of more or less shitty decisions that all end in someone being hurt or killed due entirely to circumstances, and on that day, people will become extremely suspicious of self-driving cars’ AI. This is ironic, because humans with higher forms of intelligence are killed on roads daily and we think nothing of it beyond slapping stickers on our bumpers to make ourselves feel better.

So yeah, I’m not worried about AI taking over anytime soon.

AI will, however, cause us some amount of pain…just not how you think…

Now that we’ve established that AI isn’t going to murder you in the face, let’s get on to what it will do to you. There’s a good chance that AI will take your job if you’re not retired by the 2030’s. Self-driving cars will likely replace truck and taxi drivers on the roads, both for safety and efficiency (AI doesn’t need to sleep, and you don’t have to argue with it when it takes a wrong turn- because it usually won’t). Some pilots will be out of work too- though here, I think mostly in the cargo sector, as most people would be comfortable with pilots in commercial planes (even though the truth is that auto-pilot flies a lot more than you think, even now).

Routine office work will get automated, as well as call center work, once AI can carry on conversations better (and even that won’t be as high a bar once automated assistant software starts making calls for you, and then AI can talk in more efficient and stable dialects). Warehouse jobs are already going away and manufacturing, as we all know (especially every election cycle), has been utterly gutted. Those jobs won’t be back. Even workers in China are starting to lose out to robots, like at Foxconn.

Construction work will continue to be partly automated, though I think some of the more manual work will stay for a while until bipedal or more specialized robots become very affordable and also very durable. The military will keep most of its humans, though drones and robots will form the vanguard, and shattered technology will litter the center of every battlefield. In space, our pioneers are already robots- and that’s a good thing! Clothing is approaching the point where clothes can be woven and sewn by one machine, allowing for almost complete automation of fashion.

Some more interesting professions that lose out would be prostitutes (sex bots– don’t shake your head, you know it’s going to happen…has, actually) and even sports, where more risk can be had with players piloting robots remotely, and thus smaller teams (okay, that one’s stretching it, I’ll admit).

And before all of the above finishes happening, we will see backlash…

We see it even now. Some of the tech industry’s luminaries are weighing in on AI, and not positively. Elon Musk and Stephen Hawking worry about Artificial General Intelligence displacing us. The military has worried about it (as it should, being an organization tasked with spotting threats within trends), even as it works toward advanced AI applications. The general public worries over mundane advances such as AlphaGo, oblivious to the threats to the job sector, still thinking that trade policies can fix what is essentially a completely different issue.

My own thoughts here are that some countries will enact laws barring AI to protect workers. Some will allow for the automation and deal with whatever consequences may come, adapting more or less successfully. Many countries are going to get steam-rolled, sadly, and that’s going to lead to a lot of other bad things. Suffering economies with idle populations don’t usually sit still for long before the blood starts to run. Old tensions buried by economic good times will rise to the surface, and civil wars will break out, borders will get crossed, and more than a couple of governments are ripe for a fall. What replaces them might be very reactionary, and more horrible than what preceded them. Some areas may simply disintegrate into lawlessness (Mel Gibson may make an appearance here and there to help small communities being threatened by hockey-mask-wearing sociopaths in muscle cars).

But it’s not all doom-and-gloom!

Mostly, however, I think the economies of the world will adapt to more regional and local economies, opting into the mass-produced cheap products at will, while relying on more personalized products for the majority of life’s needs. Even now, outdoor companies are leaning towards more sustainable, more costly, but very durable clothing. Designed to last decades rather than a year, such clothing lets customers pay more to get more. Using the same technology that put many out of work, local economies can fight back on an even footing by churning out higher-quality products that are created locally and more personalized.

Even cities, which currently are completely dependent on surrounding farming and industrial output for survival, can become more resilient to economic or other collapse. Advances in farming and manufacturing allow the placement of food and other product sources within cities themselves, allowing them to feed their populations from within for most foods. Energy sources are moving to renewable, and grids will eventually become more decentralized, making widespread outages a thing of the past, and energy something that your own building would generate.

Don’t even get me started on what medicine will be like. AI will revolutionize it, to the benefit of everyone.

Getting from here to there safely…

There’s a lot of debate about the pros and cons of AI and robotics, and there’s way too many moving parts to this debate to cover here in one blog post. I will say this though: As in all things, careful thought and consideration needs to be taken. Uncontrolled deployment of robotics and AI will bring about some nasty consequences. We have to be smart shoppers about this, and not only allow AI and robotics to advance, but also respond accordingly.

Sitting on our hands is no option, and neither is being reactionary and throwing the baby out with the bath water…

We need to advance AI, and we need to do it responsibly.

The post Opinion: The good and bad of AI in the near future… appeared first on BablBrain.

]]>
Interrogative Editor dev shots for early March, 2016 http://www.bablbrain.com/2016/03/03/editor-dev-shots-march-2016/ Thu, 03 Mar 2016 21:09:38 +0000 http://www.bablbrain.com/?p=259 The Interrogative 3 Editor in all of its glory! Okay, most of its glory. I finally settled on a color scheme for the editor, and after moving a few things around (and a few things yet to be moved), the editor is in a state good enough to show off. Take a look! Almost done… […]

The post Interrogative Editor dev shots for early March, 2016 appeared first on BablBrain.

]]>
The Interrogative 3 Editor in all of its glory!

Okay, most of its glory. I finally settled on a color scheme for the editor, and after moving a few things around (and a few things yet to be moved), the editor is in a state good enough to show off. Take a look!

Almost done…

There’s one tab not shown here, which is the Dialog Template Editor tab. It’s not yet done. It should be by after GDC, but right now getting things ready for the demo is taking priority. Suffice it to say, it will allow you to customize your Dialog Templates in many ways, and create sets that allow you to have multiple speech styles for your NPCs.

  • Dashboard: This is, by default, where you land when you open up the Interrogative 3 Editor. You get one-click access to the manual, tutorials, options, and at-a-glance stats for your game world data.

Interrogative 3 Editor Dashboard

  • NPC Editor: Here, you can create, edit, and delete NPCs, and view all of the attributes you’ve attached to them, as well as assign them knowledge of the game world. Double-clicking on the values in the Data Value column will bring up a procedurally-formatted GUI (which you can specify to a large degree when creating custom attributes).

Interrogative 3 Editor NPC Editor

  • Attribute Value Editor: This is a sample of the customized GUI that gets shown for editing an NPC’s personality traits. Each attribute has a format associated with it (defaulting to a text box, if you don’t want to specify any when creating new attributes) that determines what controls get shown.

Interrogative 3 Editor Attribute Value Editor Dialog

  • NPC Summary Tab: The vagueness of personality traits can sometimes make you wonder about what kind of NPC you’re creating. I’ve made this handy-dandy tab so that you can compare your NPC’s traits against some common pen and paper RPG alignment systems, as well as other measurements of NPC personality likelihoods, such as forgiveness, rendering aid, becoming angry, sad, etc.

Interrogative 3 Editor NPC Summary Tab

  • NPC Template Editor: Aside from being able to assign about 40 pre-made personality trait templates to your NPC, you can also create your own! Move the sliders, average in other templates’ values, and many other handy functions make tailoring your NPC templates a breeze!

Interrogative 3 Editor NPC Template Editor

  • Knowledge Editor: If the NPC Editor is the brains of the operation, this is the library of information you throw at it! Create and edit objects, add attributes, and set knowledge levels at the object or attribute level to partition your information. Then, load your NPC into the editor and assign the knowledge to them so they can talk about it (or use the information in other ways- it’s your data, after all).

Interrogative 3 Editor Knowledge Editor

  • Testing Tab: Once you have information and an NPC with access to it, you can test out conversations in the Testing Tab. A graphical conversation constructor makes testing queries and statements easy, and you can even talk to multiple NPCs at a single time.

Interrogative 3 Editor Testing Tab

That’s all for now!

Next Blog: More dev shots- likely of the Dialog Template Editor, Attribute Editor, and some sample conversations (in gif form, I hope!). And maybe some shots of the demo (which is a lot like the conversations in the Testing Tab, but the NPCs are little sprites in robes- who don’t like little sprites in robes?).

The post Interrogative Editor dev shots for early March, 2016 appeared first on BablBrain.

]]>
Updating the data structures for Interrogative http://www.bablbrain.com/2016/01/11/updating-the-data-structures/ Mon, 11 Jan 2016 16:39:01 +0000 http://www.bablbrain.com/?p=224 It’s all about the data I have to admit that I had resisted the urge to change the tables in the database for a few months, mainly because I’m as lazy as anyone else, and all of my work so far had been done against the data structured as it was. However, I find it […]

The post Updating the data structures for Interrogative appeared first on BablBrain.

]]>
It’s all about the data

I have to admit that I had resisted the urge to change the tables in the database for a few months, mainly because I’m as lazy as anyone else, and all of my work so far had been done against the data structured as it was. However, I find it hard to think that people would want to buy an AI tool that asks them to add 20+ tables to their database, many of which had only a dozen or so lines of data in them. It was wasteful. And besides, there are a few gains to be made by using 3-4 tables instead…

The way the data has changed- the boring part

The short answer here is that it really hasn’t, but it really has. The semantic data that gets pulled for all of the dialog and gameplay uses follows generally the same format, though it is divided up into two “silos”: Global data, and NPC data.

Global is where most of the more “static” game data would reside. Semantic data describing objects, events, items, etc, would go here. That data does not change all that much. A chair is still a chair, and when you create that item, you don’t need to recreate the data that goes with it, except any unique information, of course. Just link the chair with its information, and you’re good to go.

NPC data is just that: semantic data that is dedicated to NPCs. This is volatile if your NPCs die and respawn with information being generated on the fly, or pretty static if your game is more traditional, with NPCs that either die and respawn the same, or NPCs that are more story-oriented. In those cases, your data isn’t moving a whole lot. Either way, the separation between the two silos seemed pretty logical.

Predicates and attributes (from here on out I’m just going to call them Attributes, to be consistent) are housed in their own table, and there is a table for “Core” data, which is mainly used by the editor to store information it needs to run. These two tables are very stable, and don’t get very big at all.

And then there’s two more tables, dealing with text data each for the Global and NPC data tables. Semantic data using text would point to text strings in these tables, along with a localization ID, and pull the correct, localized text for use. You could stick the localization ID column in the semantic data tables as well, but then you’re also duplicating the other data (and the IDs of the semantic data itself then change, which is a much bigger problem to solve). At least this way you can link to the information and localize it independently, and then just add a number and option for that language and it should all work seamlessly.

The not-so-boring parts

So now that we’ve condensed our data to a few tables, we’ve also gained some serious flexibility. Here’s a few things that happen now:

  • Attributes now come with Semantic Gradients: Attributes mean so much more than just Predicates being used as action words for dialog. Attributes describe everything, and because of that, you can do two things with that data.
    • Speak to the game: You can quantify the data to the game in a numeric fashion through enumeration. A texture in the game means much to the player, who understands what an ice texture means, but less to an NPC, who “looks” at the texture and sees nothing currently. Attributes of the object that the NPC looks at can tell the NPC that the object has a Slippery Attribute of 1.0, which on a scale of -1.0 to 1.0 is pretty slippery. The NPC can then use that number in its path finding calculations.
    • Speak to the player, through the game: That same NPC can describe the object to the player in another context as “Slippery” using what is called a Semantic Gradient. I discussed Semantic Gradients here, and talked about how they were used to describe the personality traits of the NPCs (which are described for the AI in terms of -1 to 1 scales). The traits have gradients assigned to them that the NPCs can access and, using fuzzy logic, pick the closest word that reflects the numeric value. Here, 1.0 is called “Slippery” in the Slippery Attribute’s Semantic Gradient, and that’s what’s used. Other Attributes that can be used with Semantic Gradients for NPCs (or other descriptive purposes) would be things such as Softness, Roughness, Hardness, Flexibility, Transparency, etc. Whatever you need, you can create an Attribute for, and assign a Semantic Gradient of adjectives to, and that is your enumerated vocabulary that the AI can use when telling the player, instead of using a number.
  • NPCs can store just about any Attribute you think up
    • Within the semantic data format, you can store strings of data that get parsed according to what kind of Attribute it is tagged to be. The BaseTraits Attribute will tell you that you’ve got a set of 17 floats to parse that make up the base traits of the NPC. You can create an Attribute that is a pointer to items for your inventory system, or other custom Attributes that get parsed in whatever way you need.
  • You can now have multiple sets of dialog templates!
    • I went over the ability to run your data through dialog templates, so that you can fill in the blanks and let the NPCs talk in specific ways. One of the limitations of this was that there was really just one set of templates pegged to the Attributes for dialog actions, which was rather limited. With the expanded range of Attributes, you can now assign the NPC a set of dialog templates, in whole or in part, so that you are not limited to one set of generic templates for all of your characters. Take that a step further, and you can create sets of templates for emotional states, situations, etc. It’s a far more flexible system, and replaces the need for marking up your templates to parse for “flavor text” or any additional processing that may or may not be worth it, from a content perspective. Your narrative designers will thank you for this.
  • Knowledge representation is customizable now
    • So, the prior data structure leaned heavily on the category-level way of categorizing the information, and that worked well. However, there are times when you want an NPC to know fragments of information in a category that do not correspond to one “level” of knowledge. The Object Knowledge Level tags the semantic data itself with a knowledge level, which you can then use to more quickly query the database. Using that column, you can also assign knowledge to the NPC at a more granular level when you need to, for NPCs that have an incomplete or uneven knowledge of a subject.
    • In addition to the above, you can also use the Attribute that allows for the category-level method, which is still fully supported. As before, you need to parse the string representation of the knowledge and sort it, but you can now use the above method first as a faster method before resorting to this, if you find this too generic or too slow.
    • But wait, there’s more! Use a custom Attribute to assign knowledge in a way that works best for you, then parse and use it as you see fit! Mix and match if you want. The knowledge representation field is a string, and so you can put whatever you like in there.

Closing thoughts

Heh, I actually didn’t think I had as much to talk about here. As you can imagine, the above changes have made for changes being needed in the tool itself, which is a good thing, because I’m now able to get rid of a few of the lists and condense things into a more user-friendly UI. Also being added is a dialog template editor, since that is now a bigger part of the tech.

Having booked my flight and GDC tickets, I’m now available for meetings, so feel free to email me about contracts, licensing, and more. Demos will be available by GDC, and will also be posted here on the site.

Next Blog: Probably talking about the new editor look and feel, and the demo, as those are the main focus right now. Probably some more options I see possible with the new data structures, and I’ll probably outline some of the Attributes available to you when using this system.

The post Updating the data structures for Interrogative appeared first on BablBrain.

]]>
Lying and bad memory for your NPCs http://www.bablbrain.com/2015/12/03/lying-npcs/ http://www.bablbrain.com/2015/12/03/lying-npcs/#comments Thu, 03 Dec 2015 21:06:37 +0000 http://www.digitalflux.com/?p=218 People tell lies all of the time… We’ve all told lies. From when you were a little kid trying to avoid punishment to the other day…when you were trying to avoid punishment. Lying is mostly about avoiding something that is harder. Avoiding judgement, punishment, awkward conversations, or more serious confrontations. Lies have a good deal […]

The post Lying and bad memory for your NPCs appeared first on BablBrain.

]]>

People tell lies all of the time…

We’ve all told lies. From when you were a little kid trying to avoid punishment to the other day…when you were trying to avoid punishment. Lying is mostly about avoiding something that is harder. Avoiding judgement, punishment, awkward conversations, or more serious confrontations. Lies have a good deal of utility. Manipulating the knowledge you have can gain you an advantage over others, or mitigate certain circumstances. It is so prevalent in human society that most religious, cultural, and political power structures have varying amounts of laws, rules, and norms dedicated to when and why it is appropriate or inappropriate to lie.

Obviously, if we’re working on a conversational AI, we’re going to want to look at this behavior!

Lying for fun and profit…

Lying is a deceptively difficult subject, however. First, you need to determine when and why a character may want to lie. The “when” is not too difficult: We can make lying an action and then use Utility Theory or some other way of gauging if lying gets an NPC to their goal better than other actions, such as using the truth. The more difficult part is the “why”. You can probably still stick to the variety of ways to get to this, including scripting the behavior, but here’s a list of the generalized circumstances under which people lie:

  • Avoiding a confrontation: A character may lie about knowledge of something to simply let the person asking go on their way as a low-cost method of avoiding a more confrontational situation.
  • Avoiding a punishment: Denying something is a good choice if you don’t want to be punished for it. You can even tell a lie on behalf of someone else to help them avoid punishment.
  • Helping someone (or yourself): People lie on their resumes, or about people when introducing them to others. Spies lie about themselves all of the time. Nepotism often involves lying to make the installation of a favorite into an undeserved position go more smoothly.
  • Hurting someone: Pretty much the opposite of the above, people lie in order to prevent someone from getting a job, to hurt their social standing, to prevent them from achieving a goal.

The reasons above can be scripted or you can use whatever method you think best to figure out just which situation lying is best considered for. Lie to the player too often, and they won’t trust anyone. Lie too little, and it doesn’t make a difference.
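
A bare-bones sketch of the Utility Theory-style decision described above; the candidate statements and their scores are invented purely for illustration:

```python
# Sketch of a utility-based lie/truth decision: score each candidate
# statement against the NPC's current goal and pick the highest. The
# scoring here is hard-coded for the example; a real system would derive
# it from the NPC's goals and personality.

def choose_statement(candidates, utility):
    """candidates: list of (statement, is_lie); utility: statement -> float."""
    return max(candidates, key=lambda c: utility(c[0]))

# Example goal: avoid punishment, so denial scores higher than the truth,
# and framing someone else scores in between.
def utility(statement):
    scores = {"I did it": -1.0, "I didn't do it": 0.5, "Bob did it": 0.2}
    return scores[statement]

best = choose_statement(
    [("I did it", False), ("I didn't do it", True), ("Bob did it", True)],
    utility,
)
print(best[0])  # "I didn't do it"
```

The "when" is the max() call; the "why" lives in whatever shapes the utility function, which is where the scripted reasons above plug in.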

There’s just one problem…

With Interrogative, the data is quite granular, often consisting of specific predicates such as color, weight, size, scale, etc. And while you can look at those predicates and know that most lies are simple and relate to only one or two predicates or attributes, the problem is in how to manipulate that data so that it is a reliable lie.

The methods above talked about using methods like Utility Theory to figure out if a lie had more utility than telling the truth- but in reality, you’re going to want to check more than one lie. For example, if you’ve been caught by Sauron, and he wants to know who has the One Ring, it’s probably just as bad to say that Sam has it as to tell the truth (since Sam is always with Frodo). The good news is that only a few checks are needed to come up with a rational lie that points to someone other than Frodo or Sam, and lures the Dark Lord somewhere else.

So, once you can figure out when and why to lie, and find a manipulation of data that is most favorable for the action, you’re ready to actually manipulate the data. And that brings its own set of challenges- and advantages.

Semantic Gradients to the rescue!

A Semantic Gradient is basically a list of words or terms that act much the same as a Color Gradient, in that it transitions from a beginning state to an end state. The following Semantic Gradient is the list of terms that describe the Dominance trait from least dominant to most dominant in the Interrogative AI framework:

Feeble
Subservient
Submissive
Docile
Yielding
Obedient
Deferential
Accommodating
Cooperative
Humble
Neutral
Ambitious
Competitive
Stubborn
Assertive
Bossy
Aggressive
Overbearing
Forceful
Coercive
Oppressive

A Semantic Gradient does not need to have a minimum number of entries, or even have to contain text. The concept here is that we have a spectrum of values that we can work with, and this spectrum does not need to be linear. It should, however, be a consistent set of data, so don’t mix ints, floats, and strings.
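
As an illustration of how such a gradient maps onto the -1 to 1 trait scale, here is a minimal nearest-entry lookup (the rounding scheme is an assumption; the fuzzy logic in Interrogative may differ):

```python
# Sketch: map a trait value in [-1, 1] onto the Dominance gradient by
# picking the nearest entry. The linear mapping is an assumption.

DOMINANCE = [
    "Feeble", "Subservient", "Submissive", "Docile", "Yielding", "Obedient",
    "Deferential", "Accommodating", "Cooperative", "Humble", "Neutral",
    "Ambitious", "Competitive", "Stubborn", "Assertive", "Bossy",
    "Aggressive", "Overbearing", "Forceful", "Coercive", "Oppressive",
]

def gradient_term(value, gradient=DOMINANCE):
    value = max(-1.0, min(1.0, value))  # clamp to the trait's range
    # Rescale [-1, 1] to [0, len-1] and round to the nearest entry.
    index = round((value + 1.0) / 2.0 * (len(gradient) - 1))
    return gradient[index]

print(gradient_term(0.0))   # "Neutral"
print(gradient_term(1.0))   # "Oppressive"
print(gradient_term(-0.7))  # "Docile"
```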

To make another example from the world of Lord of the Rings, Sauron had initially been defeated in a previous war, and had presented himself to his victors as being much different than his “Oppressive” self. He pretended to be “Deferential” in his actions, and concealed his true nature until he had fashioned the Rings of Power and brought the empire of Numenor crashing down. If there were a Sauron NPC reflecting this time period, and it was asked how dominant it was, it certainly would not pick the Oppressive term, which is accurate and truthful, but the Deferential term, which is what Sauron was masquerading as. Having these terms available in a format that conforms with the mathematics of the representation of the NPC’s personality is helpful here.

The real work is in having the predicates, attributes, adjectives, and other data points available to use in these situations. As a designer, you’ll need to know what your NPCs can talk about, and what they can lie about (if you’re restricting them to lying about certain things), and then build that data accordingly. And like the gradient above, it’s best to be able to reference your gradient with a particular predicate or attribute, and treat it like a data type so that it becomes reusable.

Colors are best represented as RGB, HSV, CMYK, etc., so you can create functions that manipulate those data structures. The same goes for numerical data, which is sometimes as easy as multiplying or dividing by a factor that represents how much of a lie or manipulation is being applied.
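A sketch of that idea for numerical data (the factor here is arbitrary, chosen per character or per situation):

```python
def exaggerate(value, factor=1.5):
    """Inflate a numeric claim by a lie factor."""
    return value * factor

def understate(value, factor=1.5):
    """Deflate a numeric claim by a lie factor."""
    return value / factor

actual_enemies = 20
print(exaggerate(actual_enemies))  # → 30.0 ("There were at least thirty of them!")
print(understate(actual_enemies))  # roughly 13 ("A dozen or so, no more.")
```

The same multiply-or-divide trick works on any consistent numeric attribute: distances, prices, troop counts, timestamps.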

The hardest data type to address here is text. I would recommend using dialog templates and filling in attribute values from Semantic Gradients, so that a lie simply uses the same template with manipulated data instead of requiring you to write permutations of whole sentences; otherwise you'll find yourself drowning in work right away.
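A minimal sketch of that template approach (the template syntax and shortened gradient are hypothetical): keep one sentence pattern, and swap in either the truthful gradient term or one shifted along the gradient to lie.

```python
TEMPLATE = "The captain seems {dominance} to me."

# Shortened dominance gradient for the sketch.
DOMINANCE = ["Feeble", "Deferential", "Neutral", "Assertive", "Oppressive"]

def render(template, true_index, shift=0):
    """Fill the template, optionally shifting the term along the gradient to lie."""
    index = min(max(true_index + shift, 0), len(DOMINANCE) - 1)
    return template.format(dominance=DOMINANCE[index].lower())

print(render(TEMPLATE, true_index=4))            # → The captain seems oppressive to me.
print(render(TEMPLATE, true_index=4, shift=-3))  # → The captain seems deferential to me.
```

One template and one gradient cover both the honest answer and every degree of lie about it, instead of a hand-written sentence per case.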

The extra feature this affords us…

One of the benefits of being able to manipulate data is that it also lets you implement characters that exaggerate or understate things, or characters who simply have bad memories. Whether the errors are placed randomly or managed by some simple logic, the old forgetful gnome character suddenly becomes a lot more realistic.

This effect can also be applied selectively with logic to represent the effects of hypnotism, amnesia, etc. It’s just another tool in the toolkit for designers to bring their characters to life!
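Under the same machinery, a forgetful (or hypnotized, or amnesiac) character is just a perturbation applied to recalled values; a sketch, with an arbitrary error model:

```python
import random

def recall(value, reliability=1.0, max_drift=5):
    """Return a remembered value; unreliable memories drift by a random amount."""
    if random.random() < reliability:
        return value                                 # remembered correctly
    return value + random.randint(-max_drift, max_drift)

# A reliable witness vs. a forgetful gnome recalling how many wolves attacked:
print(recall(12, reliability=1.0))  # → 12
print(recall(12, reliability=0.0))  # somewhere between 7 and 17
```

Gating the `reliability` parameter behind game logic (a hypnosis flag, an age stat, a curse) is all the "selective application" amounts to.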

Concluding…

This is a high-level view of making NPCs lie, and the feature is very much a work in progress for Interrogative. The feature itself will likely not be ready in time for the version 3.0 release, but is being developed with an eye towards 3.5 or 4.0. I’m working on adding a feature road map to the website much like Unity has, so that the progress of features is updated as the versions progress.

Next blog: Changing data structures to accommodate new features, and thoughts about data in general… 

The post Lying and bad memory for your NPCs appeared first on BablBrain.

Current works http://www.bablbrain.com/2015/10/22/current-works/ Thu, 22 Oct 2015 14:27:30 +0000

The post Current works appeared first on BablBrain.

What have I been up to lately?

Working on Interrogative!

A lot of the work lately has not been the cool AI stuff, but rather futzing around with the tool and getting other things working so that I can post demos. With current browsers shunning NPAPI plugins, Unity's webplayer is no longer a feasible way to put up demos, and Unity's WebGL target is having issues with the SQLite plugin I'm using to store data for Interrogative's use. And with Flash being shunned by, well, everyone, I'm running out of choices for how to get demos up on this site. This is frustrating, because I've had a demo ready since before March…

I try not to dwell on that, though I'm looking for a suitable way to get that task done. In the meantime, I've abstracted out the MySQL layer for the tool so that it can also be used with Unity (the tool is written in C#) when I start selling it in the Asset Store. I may also have an implementation for one of the SQLite plugins out there as well. These data access layers will be unsupported, but I've put time into polishing them so that they're not inefficient (maybe not the fastest, but not slow either). Running a test of basic queries, I can do 1000 in 0.2166616 seconds on my dev laptop with normal tasks running, and, scaling roughly linearly, 5000 of those queries in 1.0138701 seconds. That's not bad performance for MySQL, even if the laptop is pretty hefty (8-core i7 with 16GB RAM and two SSDs).
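The abstraction itself can be as thin as an interface the rest of the tool codes against, with a concrete class per storage backend; a sketch of the shape in Python (the actual tool is C#, and these class names are hypothetical):

```python
import sqlite3

class DataAccessLayer:
    """Minimal interface the rest of the tool codes against."""
    def query(self, sql, params=()):
        raise NotImplementedError

class SqliteLayer(DataAccessLayer):
    """One concrete backend; a MySqlLayer would implement the same interface
    with a MySQL driver, so swapping backends never touches the calling code."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()

db = SqliteLayer()
db.query("CREATE TABLE npcs (name TEXT)")
db.query("INSERT INTO npcs VALUES (?)", ("Sauron",))
print(db.query("SELECT name FROM npcs"))  # → [('Sauron',)]
```

Keeping the interface this narrow is what makes it cheap to leave the layers unsupported while still shipping both.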

Something else that happened was the delaying of a feature. One of the things that I've wanted to do is to have a user select some answers to "moral questions", generally stating how likely an NPC would be to react in certain ways, and then be able to randomize personality traits based on those answers. Long story short, there are a lot more moving parts required for that to happen without taking a long time to debug the results, and since I have bigger fish to fry, this feature is being lumped in with a feature set slated for the next version of Interrogative: parametric generation of NPCs. It's basic to that functionality, so its delay is actually sensible.

And what about those bigger fish that need frying?

So, some of the things that still need to be knocked down for Interrogative are (this list is subject to change, with tasks being added, removed, delayed, etc.):

  • I’ve moved the conversation testing GUI to its own tab, and need to revamp how conversations can be tested. I’m going to pull some basic NLP I have from the Unity demo into the tool so that you can type out your queries and statements, in addition to using buttons and drop-down lists. The NLP I’m using for the front end (it’s not used by the NPCs) needs a bit of touching up, but I’m hoping to cover most basic typing.
  • For testing purposes, I need to implement a TraitModifier “stack” so that TraitModifiers can be added and removed during a conversation. The previous approach was pretty clunky and limited, so I need to improve it.
  • For conversation functionality, there’s a few things that I’m looking to implement:
    • NPC statement responses: That’s a deceptively complex task heading, as it encompasses a lot of things. The NPC has to know what’s being said (semantically), understand what it means- generically- to him/her/it, and respond according to their personality. This goes well beyond threats and yo-momma jokes, into more subtle stuff. Right now, I’m covering less subtle statements, since the more-subtle meanings of statements will take months to wrangle and get into a future version of Interrogative (but it will happen).
    • Flavor text within Dialog Templates: Right now, Dialog Templates are pretty straightforward, and I’d like to shake that up by using inputs from the NPC’s personality to generate snippets of “flavor text” that can be inserted into the dialog to make it more personalized to that NPC. As with Dialog Templates themselves, this will be customizable.
    • NPCs interacting with each other in a conversation: As a first pass, this will be limited. Ask two NPCs a question, and you can get two answers. Depending on their personalities, however, you may want to have an NPC correct the incorrect answer of the other NPC, or have them agree with the other. Conversation is, by nature, very much turn-based, so it’s not that hard (conceptually) for the NPC that goes second/third/etc. to simply agree with another NPC’s answer, if it is the answer they would have given. It’s also a good place to insert that flavor text!
    • Incomplete information model: This is more complex. Knowledge, previously explained, is assigned to the NPC via knowledge tags and knowledge levels, giving it a certain “depth” of knowledge for a certain category. However, this simplistic way of doing things is not the only way, and certainly not the most powerful. As an option, for those creating more narrative-heavy games, for example, another way of assigning knowledge to NPCs would be to represent knowledge levels at the object and attribute/predicate level, so that NPCs can have in-depth knowledge of some aspects of an object while not knowing anything about other aspects. This will come in extremely handy when more advanced versions of features like Opinions get implemented.
  • And once the above technical things are done, it’ll be time to write out manuals, tutorials, record videos, export DLLs for C++ and get other things done for the product launch.
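The incomplete information model in the list above can be sketched very simply (the data layout here is a hypothetical illustration, not Interrogative's schema): store a knowledge level per object attribute instead of one level per category, and only answer about attributes the NPC actually knows.

```python
# Knowledge level per (object, attribute): 0 means the NPC knows nothing.
knowledge = {
    ("OneRing", "appearance"): 3,  # has seen it up close
    ("OneRing", "location"):   0,  # has no idea where it is
}

def can_answer(npc_knowledge, obj, attribute, required_level=1):
    """True if the NPC's knowledge of this attribute meets the required depth."""
    return npc_knowledge.get((obj, attribute), 0) >= required_level

print(can_answer(knowledge, "OneRing", "appearance"))  # → True
print(can_answer(knowledge, "OneRing", "location"))    # → False
```

The per-attribute granularity is what lets a future Opinions feature weight an NPC's view of an object by how much of it they have actually seen.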

Reading all of this, it may seem like an overly complex AI system, but in reality the code is actually pretty simple, and getting it that way is what takes a lot of time. Keeping the database calls relatively fast, keeping the math light, and balancing that against the need not to tie the implementation to a specific data storage solution or engine makes this much harder than simply implementing a custom feature for a single game.

Next blog…

I didn’t want to talk about everything here, but I touched on a few advanced features that I’m planning and designing, and they bear mentioning in the blog. Advancing the Opinions feature is a big deal (big enough that it may replace TraitModifiers; it requires the incomplete information model and may spur some low-level database structure changes). But what I’ll probably talk about next is how to get your NPCs to lie, exaggerate, under-report, and talk in vague relative terms, as we all do. Sounds simple, and it actually is…
