Animation Jobs: How to simulate realistic human activity using LLMs #348
-
Hello, I'm a bit stuck with the Animator because I'm not sure whether the documentation applies to the new version of the project (the Ghost API, which normally contains the Animator). For example, I can't find the content templates that exist in the GHOST-ANIMATOR project. I've already tried adding the folders myself in my configuration, then installing Ollama and creating and running the templates, but it doesn't work (a rough sketch of what I'm doing is below). Do I need to configure something more? Also, when I try the /animations/output command for chat, I get an error message. Do you maybe have a tutorial that shows how to perform the animation jobs? Thank you in advance.
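For reference, this is roughly the kind of check I've been running against my local Ollama instance, in case it helps spot what I'm missing. The model name and prompt here are only placeholders, not actual GHOSTS content templates:

```python
# Minimal sanity check that a local Ollama instance is reachable and can generate text.
# "llama2" and the prompt are placeholders, not real GHOSTS/Animator templates.
import requests

OLLAMA = "http://localhost:11434"

# List locally available models (GET /api/tags is part of the standard Ollama API).
tags = requests.get(f"{OLLAMA}/api/tags", timeout=10).json()
print("models:", [m["name"] for m in tags.get("models", [])])

# Ask a model to generate something resembling Animator-style content.
resp = requests.post(
    f"{OLLAMA}/api/generate",
    json={
        "model": "llama2",  # placeholder model name
        "prompt": "Write a short, casual social media post about the weather.",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```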
-
Hi, sorry for the delay in answering.

There is a lot to answer here, but let me start by saying that Animator is still effectively beta software, so there are some dragons ahead. It does work, but it typically requires me to do some tinkering behind the scenes.

The content templates and such have been moved to a soon-to-be-released separate API we're calling Shadows, which interfaces with Ollama and which the GHOSTS code here calls. We did that to simplify things, but also to logically separate some concerns. I suspect there will be a lot of setup and management around Ollama and its hardware, and that is a big reason for the separation as well.

Of all the functions, Chat is probably the roughest; it requires a chat server and related pieces to be set up a certain way. The post URL is socializer.com, a Docker container product here that simulates social media interactions.

Email me directly with more questions, or follow up here and I'll try to help, but know that we are actively trying to simplify and document this stuff over the summer.
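Since Shadows isn't public yet, there is nothing concrete to copy, but to give a sense of the separation of concerns, a toy sketch of that kind of middle layer might look like the following. The route, payload shape, and model name are invented for illustration and are not the actual Shadows API:

```python
# Illustrative only: a stand-in for the kind of thin service that sits between
# GHOSTS and Ollama. The /content route and its payload are made up for this
# sketch; they are NOT the real Shadows API.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
OLLAMA = "http://localhost:11434"

@app.post("/content")
def content():
    # A GHOSTS-style client asks for a piece of content of some type...
    body = request.get_json(silent=True) or {}
    kind = body.get("type", "social")
    prompt = f"Write a short, realistic {kind} post a person might make today."

    # ...and this service keeps all Ollama-specific details in one place.
    r = requests.post(
        f"{OLLAMA}/api/generate",
        json={"model": "llama2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return jsonify({"content": r.json()["response"]})

if __name__ == "__main__":
    app.run(port=5000)
```

The point is only that the GHOSTS side asks a single simple endpoint for content, while everything Ollama-specific (models, prompts, hardware) stays behind it.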