-
Dude, super cool! I'll make an account today.
Edit: Tried just now, but the worldserver might be down, it seems.
-
A bit of a progress update: I'm now providing more context to the conversation, including the Faction, Race, Class, Level, and Spec of both the player and the responding AI. This influences the response based on the chosen personality. For example, if it has a rude/joking demeanor, it could poke fun at you being a lower level than the bot, or at differences in class, race, or faction. There are still some bugs to iron out, but it now takes that data into account when responding, giving some context about location, race, class, faction, etc.
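For anyone curious, here's a rough sketch of how that kind of context could be packed into the prompt before it's sent to the model. This is only an illustration under my own assumptions; the names (ChatContext, BuildPrompt) are placeholders, not the module's actual code.

```cpp
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

// Hypothetical snapshot of the data the post describes feeding the model.
// Field names are illustrative; the real module may structure this differently.
struct ChatContext
{
    std::string faction;     // "Alliance" or "Horde"
    std::string race;        // e.g. "Blood Elf"
    std::string playerClass; // e.g. "Paladin"
    uint32_t level;
    std::string spec;        // e.g. "Protection"
};

// Build a plain-text prompt that gives the LLM both identities plus the
// chosen personality, so replies can reference level/faction differences.
std::string BuildPrompt(ChatContext const& player, ChatContext const& bot,
                        std::string const& personality, std::string const& message)
{
    std::ostringstream prompt;
    prompt << "You are a " << personality << " level " << bot.level << ' '
           << bot.faction << ' ' << bot.race << ' ' << bot.playerClass
           << " (" << bot.spec << ") in World of Warcraft.\n"
           << "A level " << player.level << ' ' << player.faction << ' '
           << player.race << ' ' << player.playerClass
           << " says: \"" << message << "\"\n"
           << "Reply in character, in one short sentence.";
    return prompt.str();
}

int main()
{
    ChatContext player{"Alliance", "Human", "Mage", 42, "Frost"};
    ChatContext bot{"Horde", "Orc", "Warrior", 80, "Fury"};
    std::cout << BuildPrompt(player, bot, "rude, joking", "hey, duel me!") << '\n';
}
```

A rude personality plus a 38-level gap gives the model something concrete to poke fun at, which is the behavior described above.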
-
Gotta add more context and test some different models out, as well as shorten the messages, but it's looking good so far =)
-
Getting a little bit better with context and responses.

Scenario 1 - Level/Faction Difference:
Player (Level 80 Human Mage): "get out of our territory horde scum"

Scenario 2 - Combat Status:
Player (Level 80 Blood Elf Paladin): "nice tanking in that last raid"
-
Hey everyone!
I'm looking for testers for a module I've been working on. It integrates with player bots using a simple LLM setup powered by Ollama, designed to run on a local machine. Right now it's using a 1B model; a 3B or 8B is preferable if you have access to a GPU.
The cool thing is that the 1B model is currently running on CPU, so no GPU is required to use the module. I’m running it on a 6-core server with 24GB of RAM, and it performs well. For reference, the 1B model requires about 8–10GB of memory to run.
The goal is to make the bot responses feel more human-like.
The configuration offers a lot of flexibility. You can:
Adjust how many player bots respond to each input. (Currently, 2–3 bots respond, with a 75% chance of replying and a 25% chance of ignoring.)
Customize the personalities of the bots to make their responses more varied and unique.
If no response is generated, the system falls back to the player bots’ prebuilt responses.
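To make that behavior concrete, here's a minimal sketch of how the 75/25 decision, the 2-3 bot count, and the prebuilt-response fallback could fit together. The function names (GenerateLlmReply, GetPrebuiltResponse) are stand-ins, not the module's real API.

```cpp
#include <iostream>
#include <optional>
#include <random>
#include <string>

constexpr double ReplyChance = 0.75; // mirrors the 75% reply / 25% ignore split

// Stand-in for the actual Ollama call; the real module would return the
// generated text, or nothing if generation failed or timed out.
std::optional<std::string> GenerateLlmReply(std::string const& /*prompt*/)
{
    return std::nullopt;
}

// Stand-in for the player bots' prebuilt responses.
std::string GetPrebuiltResponse()
{
    return "Greetings, traveler.";
}

std::optional<std::string> MaybeRespond(std::string const& prompt, std::mt19937& rng)
{
    std::bernoulli_distribution shouldReply(ReplyChance);
    if (!shouldReply(rng))
        return std::nullopt; // 25% of the time the bot simply ignores the message

    // Try the LLM first; if no response is generated, fall back to the
    // prebuilt responses, as described above.
    if (auto reply = GenerateLlmReply(prompt))
        return reply;
    return GetPrebuiltResponse();
}

int main()
{
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> botCount(2, 3); // 2-3 bots per player message

    int responders = botCount(rng);
    for (int i = 0; i < responders; ++i)
        if (auto reply = MaybeRespond("player said: hello", rng))
            std::cout << "bot " << i << ": " << *reply << '\n';
}
```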
If you’re interested in testing it out, you can download the module from my GitLab server here:
https://gitlab.realsoftgames.win/krazor/mod_llm_chat
Let me know if you have any feedback or ideas to improve it!
Also, don't forget to install Ollama and pull the default model with:
ollama pull krith/meta-llama-3.2-1b-instruct-uncensored:IQ3_M
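If you want to sanity-check the model before wiring it into the server, Ollama exposes a local HTTP API on port 11434 by default. Here's a small standalone test in C++ with libcurl; the request shape follows Ollama's documented /api/generate endpoint, and the rest (the prompt, the build line) is just an example.

```cpp
// Standalone smoke test for the local Ollama endpoint.
// Build with: g++ test_ollama.cpp -lcurl
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: accumulate the response body into a std::string.
static size_t WriteCb(char* data, size_t size, size_t nmemb, void* userp)
{
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    std::string response;
    // "stream": false returns a single JSON object instead of a token stream.
    // The model name matches the `ollama pull` command above.
    std::string body =
        R"({"model": "krith/meta-llama-3.2-1b-instruct-uncensored:IQ3_M",)"
        R"( "prompt": "Say hi like a grumpy orc warrior.", "stream": false})";

    CURL* curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:11434/api/generate");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::cerr << "request failed: " << curl_easy_strerror(res) << "\n";
    else
        std::cout << response << "\n"; // raw JSON; the "response" field holds the text

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}
```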
Self-hosted GitLab
Want to test the module running on my WOTLK server?
Website Registration
Game version: 3.3.5a
set realmlist realsoftgames.ddns.net